00:00:00.000 Started by upstream project "autotest-per-patch" build number 126234 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.076 The recommended git tool is: git 00:00:00.076 using credential 00000000-0000-0000-0000-000000000002 00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.144 Using shallow fetch with depth 1 00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.144 > git --version # timeout=10 00:00:00.179 > git --version # 'git version 2.39.2' 00:00:00.179 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.207 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.207 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.204 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.216 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.229 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.229 > git config core.sparsecheckout # timeout=10 00:00:04.241 > git read-tree -mu HEAD # timeout=10 00:00:04.261 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.281 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.282 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.395 [Pipeline] Start of Pipeline 00:00:04.410 [Pipeline] library 00:00:04.411 Loading library shm_lib@master 00:00:04.411 Library shm_lib@master is cached. Copying from home. 00:00:04.429 [Pipeline] node 00:00:04.437 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.439 [Pipeline] { 00:00:04.450 [Pipeline] catchError 00:00:04.451 [Pipeline] { 00:00:04.461 [Pipeline] wrap 00:00:04.469 [Pipeline] { 00:00:04.474 [Pipeline] stage 00:00:04.476 [Pipeline] { (Prologue) 00:00:04.651 [Pipeline] sh 00:00:04.939 + logger -p user.info -t JENKINS-CI 00:00:04.959 [Pipeline] echo 00:00:04.961 Node: CYP12 00:00:04.968 [Pipeline] sh 00:00:05.294 [Pipeline] setCustomBuildProperty 00:00:05.305 [Pipeline] echo 00:00:05.307 Cleanup processes 00:00:05.313 [Pipeline] sh 00:00:05.597 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.597 1604131 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.608 [Pipeline] sh 00:00:05.890 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.890 ++ grep -v 'sudo pgrep' 00:00:05.890 ++ awk '{print $1}' 00:00:05.890 + sudo kill -9 00:00:05.890 + true 00:00:05.902 [Pipeline] cleanWs 00:00:05.909 [WS-CLEANUP] Deleting project workspace... 00:00:05.909 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.915 [WS-CLEANUP] done 00:00:05.919 [Pipeline] setCustomBuildProperty 00:00:05.930 [Pipeline] sh 00:00:06.225 + sudo git config --global --replace-all safe.directory '*' 00:00:06.288 [Pipeline] httpRequest 00:00:06.314 [Pipeline] echo 00:00:06.315 Sorcerer 10.211.164.101 is alive 00:00:06.322 [Pipeline] httpRequest 00:00:06.327 HttpMethod: GET 00:00:06.327 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.328 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.346 Response Code: HTTP/1.1 200 OK 00:00:06.347 Success: Status code 200 is in the accepted range: 200,404 00:00:06.347 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:15.282 [Pipeline] sh 00:00:15.568 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:15.586 [Pipeline] httpRequest 00:00:15.600 [Pipeline] echo 00:00:15.602 Sorcerer 10.211.164.101 is alive 00:00:15.612 [Pipeline] httpRequest 00:00:15.618 HttpMethod: GET 00:00:15.619 URL: http://10.211.164.101/packages/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:00:15.619 Sending request to url: http://10.211.164.101/packages/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:00:15.636 Response Code: HTTP/1.1 200 OK 00:00:15.636 Success: Status code 200 is in the accepted range: 200,404 00:00:15.637 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:00:47.368 [Pipeline] sh 00:00:47.653 + tar --no-same-owner -xf spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:00:50.223 [Pipeline] sh 00:00:50.510 + git -C spdk log --oneline -n5 00:00:50.510 cdc37ee83 env_dpdk: deprecate spdk_env_opts_init and spdk_env_init 00:00:50.510 24018edd4 all: replace spdk_env_opts_init/spdk_env_init with _ext variant 00:00:50.510 3269bc4bc env: add spdk_env_opts_init_ext() 00:00:50.510 d9917142f env: pack and assert size for spdk_env_opts 00:00:50.510 1bd83e221 sock: add spdk_sock_get_numa_socket_id 00:00:50.524 [Pipeline] } 00:00:50.543 [Pipeline] // stage 00:00:50.553 [Pipeline] stage 00:00:50.555 [Pipeline] { (Prepare) 00:00:50.574 [Pipeline] writeFile 00:00:50.591 [Pipeline] sh 00:00:50.878 + logger -p user.info -t JENKINS-CI 00:00:50.891 [Pipeline] sh 00:00:51.180 + logger -p user.info -t JENKINS-CI 00:00:51.193 [Pipeline] sh 00:00:51.479 + cat autorun-spdk.conf 00:00:51.479 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.479 SPDK_TEST_NVMF=1 00:00:51.479 SPDK_TEST_NVME_CLI=1 00:00:51.479 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.479 SPDK_TEST_NVMF_NICS=e810 00:00:51.479 SPDK_TEST_VFIOUSER=1 00:00:51.479 SPDK_RUN_UBSAN=1 00:00:51.479 NET_TYPE=phy 00:00:51.486 RUN_NIGHTLY=0 00:00:51.491 [Pipeline] readFile 00:00:51.519 [Pipeline] withEnv 00:00:51.521 [Pipeline] { 00:00:51.536 [Pipeline] sh 00:00:51.821 + set -ex 00:00:51.821 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:51.821 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:51.821 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.821 ++ SPDK_TEST_NVMF=1 00:00:51.821 ++ SPDK_TEST_NVME_CLI=1 00:00:51.821 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.821 ++ SPDK_TEST_NVMF_NICS=e810 00:00:51.821 ++ SPDK_TEST_VFIOUSER=1 00:00:51.821 ++ SPDK_RUN_UBSAN=1 00:00:51.821 ++ NET_TYPE=phy 00:00:51.821 ++ RUN_NIGHTLY=0 00:00:51.821 + case $SPDK_TEST_NVMF_NICS in 00:00:51.821 + DRIVERS=ice 00:00:51.821 + [[ tcp == 
\r\d\m\a ]] 00:00:51.821 + [[ -n ice ]] 00:00:51.821 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:51.821 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:51.821 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:51.821 rmmod: ERROR: Module irdma is not currently loaded 00:00:51.821 rmmod: ERROR: Module i40iw is not currently loaded 00:00:51.821 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:51.821 + true 00:00:51.821 + for D in $DRIVERS 00:00:51.821 + sudo modprobe ice 00:00:51.821 + exit 0 00:00:51.831 [Pipeline] } 00:00:51.847 [Pipeline] // withEnv 00:00:51.852 [Pipeline] } 00:00:51.866 [Pipeline] // stage 00:00:51.875 [Pipeline] catchError 00:00:51.877 [Pipeline] { 00:00:51.890 [Pipeline] timeout 00:00:51.890 Timeout set to expire in 50 min 00:00:51.891 [Pipeline] { 00:00:51.903 [Pipeline] stage 00:00:51.904 [Pipeline] { (Tests) 00:00:51.939 [Pipeline] sh 00:00:52.235 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.235 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.235 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.235 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:52.235 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.235 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:52.235 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:52.235 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:52.235 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:52.235 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:52.235 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:52.235 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.235 + source /etc/os-release 00:00:52.235 ++ NAME='Fedora Linux' 00:00:52.235 ++ VERSION='38 (Cloud Edition)' 00:00:52.235 ++ ID=fedora 00:00:52.235 ++ VERSION_ID=38 00:00:52.235 ++ VERSION_CODENAME= 00:00:52.235 ++ PLATFORM_ID=platform:f38 00:00:52.235 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:52.235 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:52.235 ++ LOGO=fedora-logo-icon 00:00:52.235 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:52.235 ++ HOME_URL=https://fedoraproject.org/ 00:00:52.235 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:52.235 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:52.235 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:52.235 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:52.235 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:52.235 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:52.235 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:52.235 ++ SUPPORT_END=2024-05-14 00:00:52.235 ++ VARIANT='Cloud Edition' 00:00:52.235 ++ VARIANT_ID=cloud 00:00:52.235 + uname -a 00:00:52.235 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:52.235 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:55.533 Hugepages 00:00:55.533 node hugesize free / total 00:00:55.533 node0 1048576kB 0 / 0 00:00:55.533 node0 2048kB 0 / 0 00:00:55.533 node1 1048576kB 0 / 0 00:00:55.533 node1 2048kB 0 / 0 00:00:55.533 00:00:55.533 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:55.533 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:55.533 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:55.533 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:55.533 I/OAT 
0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:55.533 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:55.533 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:55.533 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:55.533 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:55.533 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:55.533 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:55.533 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:55.533 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:55.533 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:55.533 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:55.533 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:55.533 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:55.533 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:55.533 + rm -f /tmp/spdk-ld-path 00:00:55.794 + source autorun-spdk.conf 00:00:55.794 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.794 ++ SPDK_TEST_NVMF=1 00:00:55.794 ++ SPDK_TEST_NVME_CLI=1 00:00:55.794 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.794 ++ SPDK_TEST_NVMF_NICS=e810 00:00:55.794 ++ SPDK_TEST_VFIOUSER=1 00:00:55.794 ++ SPDK_RUN_UBSAN=1 00:00:55.794 ++ NET_TYPE=phy 00:00:55.794 ++ RUN_NIGHTLY=0 00:00:55.794 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:55.794 + [[ -n '' ]] 00:00:55.794 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.794 + for M in /var/spdk/build-*-manifest.txt 00:00:55.794 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:55.794 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.794 + for M in /var/spdk/build-*-manifest.txt 00:00:55.794 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:55.794 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.794 ++ uname 00:00:55.794 + [[ Linux == \L\i\n\u\x ]] 00:00:55.794 + sudo dmesg -T 00:00:55.794 + sudo dmesg --clear 00:00:55.794 + dmesg_pid=1605777 00:00:55.794 + [[ Fedora Linux == FreeBSD ]] 00:00:55.794 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.794 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.794 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:55.794 + [[ -x /usr/src/fio-static/fio ]] 00:00:55.794 + export FIO_BIN=/usr/src/fio-static/fio 00:00:55.794 + FIO_BIN=/usr/src/fio-static/fio 00:00:55.794 + sudo dmesg -Tw 00:00:55.794 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:55.794 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:55.794 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:55.794 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.794 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.794 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:55.794 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.794 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.794 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:55.794 Test configuration: 00:00:55.794 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.794 SPDK_TEST_NVMF=1 00:00:55.794 SPDK_TEST_NVME_CLI=1 00:00:55.794 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.794 SPDK_TEST_NVMF_NICS=e810 00:00:55.794 SPDK_TEST_VFIOUSER=1 00:00:55.794 SPDK_RUN_UBSAN=1 00:00:55.794 NET_TYPE=phy 00:00:55.794 RUN_NIGHTLY=0 20:51:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:55.794 20:51:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:55.794 20:51:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:55.794 20:51:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:55.794 20:51:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.794 20:51:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.794 20:51:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.794 20:51:23 -- paths/export.sh@5 -- $ export PATH 00:00:55.795 20:51:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.795 20:51:23 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:55.795 20:51:23 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:55.795 20:51:23 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069483.XXXXXX 00:00:55.795 20:51:23 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721069483.hG9FZZ 00:00:55.795 20:51:23 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:55.795 20:51:23 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:55.795 20:51:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:55.795 20:51:23 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:55.795 20:51:23 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:55.795 20:51:23 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:55.795 20:51:23 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:55.795 20:51:23 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.795 20:51:23 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:55.795 20:51:23 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:55.795 20:51:23 -- pm/common@17 -- $ local monitor 00:00:55.795 20:51:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.055 20:51:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.055 20:51:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.055 20:51:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.055 20:51:23 -- pm/common@21 -- $ date +%s 00:00:56.055 20:51:23 -- pm/common@25 -- $ sleep 1 00:00:56.055 20:51:23 -- pm/common@21 -- $ date +%s 00:00:56.055 20:51:23 -- pm/common@21 -- $ date +%s 00:00:56.055 20:51:23 -- pm/common@21 -- $ date +%s 00:00:56.055 20:51:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721069483 00:00:56.055 20:51:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721069483 00:00:56.055 20:51:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721069483 00:00:56.055 20:51:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721069483 00:00:56.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721069483_collect-vmstat.pm.log 00:00:56.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721069483_collect-cpu-load.pm.log 00:00:56.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721069483_collect-cpu-temp.pm.log 00:00:56.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721069483_collect-bmc-pm.bmc.pm.log 00:00:57.008 20:51:24 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:57.008 20:51:24 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:57.008 20:51:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:57.008 20:51:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.008 20:51:24 -- spdk/autobuild.sh@16 -- $ date -u 00:00:57.008 Mon Jul 15 06:51:24 PM UTC 2024 00:00:57.008 20:51:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:57.008 v24.09-pre-226-gcdc37ee83 00:00:57.008 20:51:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:57.008 20:51:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:57.008 20:51:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:57.008 20:51:24 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:57.008 20:51:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:57.008 20:51:24 -- common/autotest_common.sh@10 -- $ set +x 00:00:57.008 ************************************ 00:00:57.008 START TEST ubsan 00:00:57.008 ************************************ 00:00:57.008 20:51:24 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:57.008 using ubsan 00:00:57.008 00:00:57.008 real 0m0.001s 00:00:57.008 user 0m0.000s 00:00:57.008 sys 0m0.000s 00:00:57.008 20:51:24 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:57.008 20:51:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:57.008 ************************************ 00:00:57.008 END TEST ubsan 00:00:57.008 ************************************ 00:00:57.008 20:51:24 -- common/autotest_common.sh@1142 -- $ return 0 00:00:57.008 20:51:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:57.008 20:51:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:57.008 20:51:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:57.008 20:51:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:57.008 20:51:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:57.008 20:51:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:57.008 20:51:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:57.008 20:51:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:57.008 20:51:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:57.268 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:57.268 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:57.528 Using 'verbs' RDMA provider 00:01:13.374 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:25.607 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:25.607 Creating mk/config.mk...done. 00:01:25.607 Creating mk/cc.flags.mk...done. 00:01:25.607 Type 'make' to build. 
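The configure invocation captured above can be replayed outside the CI pipeline. A minimal sketch, assuming a local SPDK checkout at ~/spdk (the checkout path and job count are placeholders; the flag set is the one recorded in the log):

    # Configure SPDK the way autobuild did here: debug build with UBSan and
    # coverage enabled, vfio-user support, shared libraries.
    cd ~/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    # Build; the CI host ran make -j144, a local run can simply use all cores.
    make -j"$(nproc)"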
00:01:25.607 20:51:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:25.607 20:51:52 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:25.607 20:51:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:25.607 20:51:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.607 ************************************ 00:01:25.607 START TEST make 00:01:25.607 ************************************ 00:01:25.607 20:51:52 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:25.607 make[1]: Nothing to be done for 'all'. 00:01:26.548 The Meson build system 00:01:26.548 Version: 1.3.1 00:01:26.548 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:26.548 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:26.548 Build type: native build 00:01:26.548 Project name: libvfio-user 00:01:26.548 Project version: 0.0.1 00:01:26.548 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:26.548 C linker for the host machine: cc ld.bfd 2.39-16 00:01:26.548 Host machine cpu family: x86_64 00:01:26.548 Host machine cpu: x86_64 00:01:26.548 Run-time dependency threads found: YES 00:01:26.548 Library dl found: YES 00:01:26.548 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:26.548 Run-time dependency json-c found: YES 0.17 00:01:26.548 Run-time dependency cmocka found: YES 1.1.7 00:01:26.548 Program pytest-3 found: NO 00:01:26.548 Program flake8 found: NO 00:01:26.548 Program misspell-fixer found: NO 00:01:26.548 Program restructuredtext-lint found: NO 00:01:26.548 Program valgrind found: YES (/usr/bin/valgrind) 00:01:26.548 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:26.548 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:26.548 Compiler for C supports arguments -Wwrite-strings: YES 00:01:26.548 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:26.548 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:26.548 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:26.548 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:26.548 Build targets in project: 8 00:01:26.548 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:26.548 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:26.548 00:01:26.548 libvfio-user 0.0.1 00:01:26.548 00:01:26.548 User defined options 00:01:26.548 buildtype : debug 00:01:26.548 default_library: shared 00:01:26.548 libdir : /usr/local/lib 00:01:26.548 00:01:26.548 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:27.116 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:27.116 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:27.116 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:27.116 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:27.116 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:27.116 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:27.116 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:27.116 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:27.116 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:27.116 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:27.116 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:27.116 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:27.116 [12/37] Compiling C object samples/null.p/null.c.o 00:01:27.116 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:27.116 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:27.116 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:27.116 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:27.116 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:27.116 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:27.116 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:27.116 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:27.116 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:27.116 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:27.116 [23/37] Compiling C object samples/server.p/server.c.o 00:01:27.116 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:27.116 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:27.116 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:27.116 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:27.116 [28/37] Compiling C object samples/client.p/client.c.o 00:01:27.376 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:27.376 [30/37] Linking target test/unit_tests 00:01:27.376 [31/37] Linking target samples/client 00:01:27.376 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:27.376 [33/37] Linking target samples/server 00:01:27.376 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:27.376 [35/37] Linking target samples/null 00:01:27.376 [36/37] Linking target samples/lspci 00:01:27.376 [37/37] Linking target samples/gpio-pci-idio-16 00:01:27.376 INFO: autodetecting backend as ninja 00:01:27.376 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
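The libvfio-user sub-build shown above is a stock Meson/ninja flow: a debug, shared-library configuration, a ninja compile of the 37 targets, then an install staged into a DESTDIR. A rough standalone equivalent, assuming the sources live in ./libvfio-user and using placeholder build and install directories:

    # Configure a debug build that produces shared libraries, as in the log.
    meson setup build-debug libvfio-user -Dbuildtype=debug -Ddefault_library=shared
    # Compile the library, samples and unit tests.
    ninja -C build-debug
    # Stage the install into a scratch DESTDIR, mirroring what the SPDK build does.
    DESTDIR="$PWD/libvfio-user-install" meson install --quiet -C build-debug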
00:01:27.376 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:27.635 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:27.635 ninja: no work to do. 00:01:34.293 The Meson build system 00:01:34.293 Version: 1.3.1 00:01:34.293 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:34.293 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:34.293 Build type: native build 00:01:34.293 Program cat found: YES (/usr/bin/cat) 00:01:34.293 Project name: DPDK 00:01:34.293 Project version: 24.03.0 00:01:34.293 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:34.293 C linker for the host machine: cc ld.bfd 2.39-16 00:01:34.293 Host machine cpu family: x86_64 00:01:34.293 Host machine cpu: x86_64 00:01:34.293 Message: ## Building in Developer Mode ## 00:01:34.293 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:34.293 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:34.293 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:34.293 Program python3 found: YES (/usr/bin/python3) 00:01:34.293 Program cat found: YES (/usr/bin/cat) 00:01:34.293 Compiler for C supports arguments -march=native: YES 00:01:34.293 Checking for size of "void *" : 8 00:01:34.293 Checking for size of "void *" : 8 (cached) 00:01:34.293 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:34.293 Library m found: YES 00:01:34.293 Library numa found: YES 00:01:34.293 Has header "numaif.h" : YES 00:01:34.293 Library fdt found: NO 00:01:34.293 Library execinfo found: NO 00:01:34.293 Has header "execinfo.h" : YES 00:01:34.293 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:34.293 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:34.293 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:34.293 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:34.293 Run-time dependency openssl found: YES 3.0.9 00:01:34.293 Run-time dependency libpcap found: YES 1.10.4 00:01:34.293 Has header "pcap.h" with dependency libpcap: YES 00:01:34.293 Compiler for C supports arguments -Wcast-qual: YES 00:01:34.293 Compiler for C supports arguments -Wdeprecated: YES 00:01:34.293 Compiler for C supports arguments -Wformat: YES 00:01:34.293 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:34.293 Compiler for C supports arguments -Wformat-security: NO 00:01:34.293 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:34.293 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:34.293 Compiler for C supports arguments -Wnested-externs: YES 00:01:34.293 Compiler for C supports arguments -Wold-style-definition: YES 00:01:34.293 Compiler for C supports arguments -Wpointer-arith: YES 00:01:34.293 Compiler for C supports arguments -Wsign-compare: YES 00:01:34.293 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:34.293 Compiler for C supports arguments -Wundef: YES 00:01:34.293 Compiler for C supports arguments -Wwrite-strings: YES 00:01:34.293 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:34.293 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:34.293 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:34.293 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:34.293 Program objdump found: YES (/usr/bin/objdump) 00:01:34.293 Compiler for C supports arguments -mavx512f: YES 00:01:34.293 Checking if "AVX512 checking" compiles: YES 00:01:34.293 Fetching value of define "__SSE4_2__" : 1 00:01:34.293 Fetching value of define "__AES__" : 1 00:01:34.293 Fetching value of define "__AVX__" : 1 00:01:34.293 Fetching value of define "__AVX2__" : 1 00:01:34.293 Fetching value of define "__AVX512BW__" : 1 00:01:34.293 Fetching value of define "__AVX512CD__" : 1 00:01:34.294 Fetching value of define "__AVX512DQ__" : 1 00:01:34.294 Fetching value of define "__AVX512F__" : 1 00:01:34.294 Fetching value of define "__AVX512VL__" : 1 00:01:34.294 Fetching value of define "__PCLMUL__" : 1 00:01:34.294 Fetching value of define "__RDRND__" : 1 00:01:34.294 Fetching value of define "__RDSEED__" : 1 00:01:34.294 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:34.294 Fetching value of define "__znver1__" : (undefined) 00:01:34.294 Fetching value of define "__znver2__" : (undefined) 00:01:34.294 Fetching value of define "__znver3__" : (undefined) 00:01:34.294 Fetching value of define "__znver4__" : (undefined) 00:01:34.294 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:34.294 Message: lib/log: Defining dependency "log" 00:01:34.294 Message: lib/kvargs: Defining dependency "kvargs" 00:01:34.294 Message: lib/telemetry: Defining dependency "telemetry" 00:01:34.294 Checking for function "getentropy" : NO 00:01:34.294 Message: lib/eal: Defining dependency "eal" 00:01:34.294 Message: lib/ring: Defining dependency "ring" 00:01:34.294 Message: lib/rcu: Defining dependency "rcu" 00:01:34.294 Message: lib/mempool: Defining dependency "mempool" 00:01:34.294 Message: lib/mbuf: Defining dependency "mbuf" 00:01:34.294 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:34.294 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:34.294 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:34.294 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:34.294 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:34.294 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:34.294 Compiler for C supports arguments -mpclmul: YES 00:01:34.294 Compiler for C supports arguments -maes: YES 00:01:34.294 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:34.294 Compiler for C supports arguments -mavx512bw: YES 00:01:34.294 Compiler for C supports arguments -mavx512dq: YES 00:01:34.294 Compiler for C supports arguments -mavx512vl: YES 00:01:34.294 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:34.294 Compiler for C supports arguments -mavx2: YES 00:01:34.294 Compiler for C supports arguments -mavx: YES 00:01:34.294 Message: lib/net: Defining dependency "net" 00:01:34.294 Message: lib/meter: Defining dependency "meter" 00:01:34.294 Message: lib/ethdev: Defining dependency "ethdev" 00:01:34.294 Message: lib/pci: Defining dependency "pci" 00:01:34.294 Message: lib/cmdline: Defining dependency "cmdline" 00:01:34.294 Message: lib/hash: Defining dependency "hash" 00:01:34.294 Message: lib/timer: Defining dependency "timer" 00:01:34.294 Message: lib/compressdev: Defining dependency "compressdev" 00:01:34.294 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:34.294 Message: lib/dmadev: Defining dependency "dmadev" 00:01:34.294 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:34.294 Message: lib/power: Defining dependency "power" 00:01:34.294 Message: lib/reorder: Defining dependency "reorder" 00:01:34.294 Message: lib/security: Defining dependency "security" 00:01:34.294 Has header "linux/userfaultfd.h" : YES 00:01:34.294 Has header "linux/vduse.h" : YES 00:01:34.294 Message: lib/vhost: Defining dependency "vhost" 00:01:34.294 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:34.294 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:34.294 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:34.294 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:34.294 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:34.294 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:34.294 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:34.294 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:34.294 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:34.294 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:34.294 Program doxygen found: YES (/usr/bin/doxygen) 00:01:34.294 Configuring doxy-api-html.conf using configuration 00:01:34.294 Configuring doxy-api-man.conf using configuration 00:01:34.294 Program mandb found: YES (/usr/bin/mandb) 00:01:34.294 Program sphinx-build found: NO 00:01:34.294 Configuring rte_build_config.h using configuration 00:01:34.294 Message: 00:01:34.294 ================= 00:01:34.294 Applications Enabled 00:01:34.294 ================= 00:01:34.294 00:01:34.294 apps: 00:01:34.294 00:01:34.294 00:01:34.294 Message: 00:01:34.294 ================= 00:01:34.294 Libraries Enabled 00:01:34.294 ================= 00:01:34.294 00:01:34.294 libs: 00:01:34.294 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:34.294 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:34.294 cryptodev, dmadev, power, reorder, security, vhost, 00:01:34.294 00:01:34.294 Message: 00:01:34.294 =============== 00:01:34.294 Drivers Enabled 00:01:34.294 =============== 00:01:34.294 00:01:34.294 common: 00:01:34.294 00:01:34.294 bus: 00:01:34.294 pci, vdev, 00:01:34.294 mempool: 00:01:34.294 ring, 00:01:34.294 dma: 00:01:34.294 00:01:34.294 net: 00:01:34.294 00:01:34.294 crypto: 00:01:34.294 00:01:34.294 compress: 00:01:34.294 00:01:34.294 vdpa: 00:01:34.294 00:01:34.294 00:01:34.294 Message: 00:01:34.294 ================= 00:01:34.294 Content Skipped 00:01:34.294 ================= 00:01:34.294 00:01:34.294 apps: 00:01:34.294 dumpcap: explicitly disabled via build config 00:01:34.294 graph: explicitly disabled via build config 00:01:34.294 pdump: explicitly disabled via build config 00:01:34.294 proc-info: explicitly disabled via build config 00:01:34.294 test-acl: explicitly disabled via build config 00:01:34.294 test-bbdev: explicitly disabled via build config 00:01:34.294 test-cmdline: explicitly disabled via build config 00:01:34.294 test-compress-perf: explicitly disabled via build config 00:01:34.294 test-crypto-perf: explicitly disabled via build config 00:01:34.294 test-dma-perf: explicitly disabled via build config 00:01:34.294 test-eventdev: explicitly disabled via build config 00:01:34.294 test-fib: explicitly disabled via build config 00:01:34.294 test-flow-perf: explicitly disabled via build config 00:01:34.294 test-gpudev: explicitly disabled via build config 00:01:34.294 
test-mldev: explicitly disabled via build config 00:01:34.294 test-pipeline: explicitly disabled via build config 00:01:34.294 test-pmd: explicitly disabled via build config 00:01:34.294 test-regex: explicitly disabled via build config 00:01:34.294 test-sad: explicitly disabled via build config 00:01:34.294 test-security-perf: explicitly disabled via build config 00:01:34.294 00:01:34.294 libs: 00:01:34.294 argparse: explicitly disabled via build config 00:01:34.294 metrics: explicitly disabled via build config 00:01:34.294 acl: explicitly disabled via build config 00:01:34.294 bbdev: explicitly disabled via build config 00:01:34.294 bitratestats: explicitly disabled via build config 00:01:34.294 bpf: explicitly disabled via build config 00:01:34.294 cfgfile: explicitly disabled via build config 00:01:34.294 distributor: explicitly disabled via build config 00:01:34.294 efd: explicitly disabled via build config 00:01:34.294 eventdev: explicitly disabled via build config 00:01:34.294 dispatcher: explicitly disabled via build config 00:01:34.294 gpudev: explicitly disabled via build config 00:01:34.294 gro: explicitly disabled via build config 00:01:34.294 gso: explicitly disabled via build config 00:01:34.294 ip_frag: explicitly disabled via build config 00:01:34.294 jobstats: explicitly disabled via build config 00:01:34.294 latencystats: explicitly disabled via build config 00:01:34.294 lpm: explicitly disabled via build config 00:01:34.294 member: explicitly disabled via build config 00:01:34.294 pcapng: explicitly disabled via build config 00:01:34.294 rawdev: explicitly disabled via build config 00:01:34.294 regexdev: explicitly disabled via build config 00:01:34.294 mldev: explicitly disabled via build config 00:01:34.294 rib: explicitly disabled via build config 00:01:34.294 sched: explicitly disabled via build config 00:01:34.294 stack: explicitly disabled via build config 00:01:34.294 ipsec: explicitly disabled via build config 00:01:34.294 pdcp: explicitly disabled via build config 00:01:34.294 fib: explicitly disabled via build config 00:01:34.294 port: explicitly disabled via build config 00:01:34.294 pdump: explicitly disabled via build config 00:01:34.294 table: explicitly disabled via build config 00:01:34.294 pipeline: explicitly disabled via build config 00:01:34.294 graph: explicitly disabled via build config 00:01:34.294 node: explicitly disabled via build config 00:01:34.294 00:01:34.294 drivers: 00:01:34.294 common/cpt: not in enabled drivers build config 00:01:34.294 common/dpaax: not in enabled drivers build config 00:01:34.294 common/iavf: not in enabled drivers build config 00:01:34.294 common/idpf: not in enabled drivers build config 00:01:34.294 common/ionic: not in enabled drivers build config 00:01:34.294 common/mvep: not in enabled drivers build config 00:01:34.294 common/octeontx: not in enabled drivers build config 00:01:34.294 bus/auxiliary: not in enabled drivers build config 00:01:34.294 bus/cdx: not in enabled drivers build config 00:01:34.294 bus/dpaa: not in enabled drivers build config 00:01:34.294 bus/fslmc: not in enabled drivers build config 00:01:34.294 bus/ifpga: not in enabled drivers build config 00:01:34.294 bus/platform: not in enabled drivers build config 00:01:34.294 bus/uacce: not in enabled drivers build config 00:01:34.294 bus/vmbus: not in enabled drivers build config 00:01:34.294 common/cnxk: not in enabled drivers build config 00:01:34.294 common/mlx5: not in enabled drivers build config 00:01:34.294 common/nfp: not in enabled drivers 
build config 00:01:34.294 common/nitrox: not in enabled drivers build config 00:01:34.294 common/qat: not in enabled drivers build config 00:01:34.294 common/sfc_efx: not in enabled drivers build config 00:01:34.294 mempool/bucket: not in enabled drivers build config 00:01:34.294 mempool/cnxk: not in enabled drivers build config 00:01:34.294 mempool/dpaa: not in enabled drivers build config 00:01:34.294 mempool/dpaa2: not in enabled drivers build config 00:01:34.294 mempool/octeontx: not in enabled drivers build config 00:01:34.294 mempool/stack: not in enabled drivers build config 00:01:34.294 dma/cnxk: not in enabled drivers build config 00:01:34.294 dma/dpaa: not in enabled drivers build config 00:01:34.294 dma/dpaa2: not in enabled drivers build config 00:01:34.294 dma/hisilicon: not in enabled drivers build config 00:01:34.294 dma/idxd: not in enabled drivers build config 00:01:34.294 dma/ioat: not in enabled drivers build config 00:01:34.294 dma/skeleton: not in enabled drivers build config 00:01:34.294 net/af_packet: not in enabled drivers build config 00:01:34.294 net/af_xdp: not in enabled drivers build config 00:01:34.295 net/ark: not in enabled drivers build config 00:01:34.295 net/atlantic: not in enabled drivers build config 00:01:34.295 net/avp: not in enabled drivers build config 00:01:34.295 net/axgbe: not in enabled drivers build config 00:01:34.295 net/bnx2x: not in enabled drivers build config 00:01:34.295 net/bnxt: not in enabled drivers build config 00:01:34.295 net/bonding: not in enabled drivers build config 00:01:34.295 net/cnxk: not in enabled drivers build config 00:01:34.295 net/cpfl: not in enabled drivers build config 00:01:34.295 net/cxgbe: not in enabled drivers build config 00:01:34.295 net/dpaa: not in enabled drivers build config 00:01:34.295 net/dpaa2: not in enabled drivers build config 00:01:34.295 net/e1000: not in enabled drivers build config 00:01:34.295 net/ena: not in enabled drivers build config 00:01:34.295 net/enetc: not in enabled drivers build config 00:01:34.295 net/enetfec: not in enabled drivers build config 00:01:34.295 net/enic: not in enabled drivers build config 00:01:34.295 net/failsafe: not in enabled drivers build config 00:01:34.295 net/fm10k: not in enabled drivers build config 00:01:34.295 net/gve: not in enabled drivers build config 00:01:34.295 net/hinic: not in enabled drivers build config 00:01:34.295 net/hns3: not in enabled drivers build config 00:01:34.295 net/i40e: not in enabled drivers build config 00:01:34.295 net/iavf: not in enabled drivers build config 00:01:34.295 net/ice: not in enabled drivers build config 00:01:34.295 net/idpf: not in enabled drivers build config 00:01:34.295 net/igc: not in enabled drivers build config 00:01:34.295 net/ionic: not in enabled drivers build config 00:01:34.295 net/ipn3ke: not in enabled drivers build config 00:01:34.295 net/ixgbe: not in enabled drivers build config 00:01:34.295 net/mana: not in enabled drivers build config 00:01:34.295 net/memif: not in enabled drivers build config 00:01:34.295 net/mlx4: not in enabled drivers build config 00:01:34.295 net/mlx5: not in enabled drivers build config 00:01:34.295 net/mvneta: not in enabled drivers build config 00:01:34.295 net/mvpp2: not in enabled drivers build config 00:01:34.295 net/netvsc: not in enabled drivers build config 00:01:34.295 net/nfb: not in enabled drivers build config 00:01:34.295 net/nfp: not in enabled drivers build config 00:01:34.295 net/ngbe: not in enabled drivers build config 00:01:34.295 net/null: not in 
enabled drivers build config 00:01:34.295 net/octeontx: not in enabled drivers build config 00:01:34.295 net/octeon_ep: not in enabled drivers build config 00:01:34.295 net/pcap: not in enabled drivers build config 00:01:34.295 net/pfe: not in enabled drivers build config 00:01:34.295 net/qede: not in enabled drivers build config 00:01:34.295 net/ring: not in enabled drivers build config 00:01:34.295 net/sfc: not in enabled drivers build config 00:01:34.295 net/softnic: not in enabled drivers build config 00:01:34.295 net/tap: not in enabled drivers build config 00:01:34.295 net/thunderx: not in enabled drivers build config 00:01:34.295 net/txgbe: not in enabled drivers build config 00:01:34.295 net/vdev_netvsc: not in enabled drivers build config 00:01:34.295 net/vhost: not in enabled drivers build config 00:01:34.295 net/virtio: not in enabled drivers build config 00:01:34.295 net/vmxnet3: not in enabled drivers build config 00:01:34.295 raw/*: missing internal dependency, "rawdev" 00:01:34.295 crypto/armv8: not in enabled drivers build config 00:01:34.295 crypto/bcmfs: not in enabled drivers build config 00:01:34.295 crypto/caam_jr: not in enabled drivers build config 00:01:34.295 crypto/ccp: not in enabled drivers build config 00:01:34.295 crypto/cnxk: not in enabled drivers build config 00:01:34.295 crypto/dpaa_sec: not in enabled drivers build config 00:01:34.295 crypto/dpaa2_sec: not in enabled drivers build config 00:01:34.295 crypto/ipsec_mb: not in enabled drivers build config 00:01:34.295 crypto/mlx5: not in enabled drivers build config 00:01:34.295 crypto/mvsam: not in enabled drivers build config 00:01:34.295 crypto/nitrox: not in enabled drivers build config 00:01:34.295 crypto/null: not in enabled drivers build config 00:01:34.295 crypto/octeontx: not in enabled drivers build config 00:01:34.295 crypto/openssl: not in enabled drivers build config 00:01:34.295 crypto/scheduler: not in enabled drivers build config 00:01:34.295 crypto/uadk: not in enabled drivers build config 00:01:34.295 crypto/virtio: not in enabled drivers build config 00:01:34.295 compress/isal: not in enabled drivers build config 00:01:34.295 compress/mlx5: not in enabled drivers build config 00:01:34.295 compress/nitrox: not in enabled drivers build config 00:01:34.295 compress/octeontx: not in enabled drivers build config 00:01:34.295 compress/zlib: not in enabled drivers build config 00:01:34.295 regex/*: missing internal dependency, "regexdev" 00:01:34.295 ml/*: missing internal dependency, "mldev" 00:01:34.295 vdpa/ifc: not in enabled drivers build config 00:01:34.295 vdpa/mlx5: not in enabled drivers build config 00:01:34.295 vdpa/nfp: not in enabled drivers build config 00:01:34.295 vdpa/sfc: not in enabled drivers build config 00:01:34.295 event/*: missing internal dependency, "eventdev" 00:01:34.295 baseband/*: missing internal dependency, "bbdev" 00:01:34.295 gpu/*: missing internal dependency, "gpudev" 00:01:34.295 00:01:34.295 00:01:34.295 Build targets in project: 84 00:01:34.295 00:01:34.295 DPDK 24.03.0 00:01:34.295 00:01:34.295 User defined options 00:01:34.295 buildtype : debug 00:01:34.295 default_library : shared 00:01:34.295 libdir : lib 00:01:34.295 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:34.295 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:34.295 c_link_args : 00:01:34.295 cpu_instruction_set: native 00:01:34.295 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:34.295 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:34.295 enable_docs : false 00:01:34.295 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:34.295 enable_kmods : false 00:01:34.295 max_lcores : 128 00:01:34.295 tests : false 00:01:34.295 00:01:34.295 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:34.295 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:34.570 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:34.570 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:34.570 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:34.570 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:34.570 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:34.570 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:34.570 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:34.570 [8/267] Linking static target lib/librte_log.a 00:01:34.570 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:34.570 [10/267] Linking static target lib/librte_kvargs.a 00:01:34.570 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:34.570 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:34.570 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:34.570 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:34.570 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:34.570 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:34.570 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:34.570 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:34.570 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:34.570 [20/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:34.570 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:34.570 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:34.570 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:34.570 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:34.570 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:34.570 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:34.570 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:34.570 [28/267] Linking static target lib/librte_pci.a 00:01:34.570 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:34.570 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:34.840 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 
00:01:34.840 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:34.840 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:34.840 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:34.841 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:34.841 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:34.841 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:34.841 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:34.841 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:34.841 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:34.841 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:34.841 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.841 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:34.841 [44/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:34.841 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:34.841 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:34.841 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:34.841 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:34.841 [49/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:34.841 [50/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.100 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.100 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.100 [53/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.100 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.100 [55/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:35.100 [56/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.100 [57/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.100 [58/267] Linking static target lib/librte_telemetry.a 00:01:35.100 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.100 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.100 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.100 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.100 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.100 [64/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:35.100 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.100 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.100 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:35.100 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.100 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.100 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.100 
[71/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.100 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.100 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.100 [74/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:35.100 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.100 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.100 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.100 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.100 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.100 [80/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.100 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.100 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.100 [83/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.100 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.100 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.100 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.100 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.100 [88/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:35.100 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.100 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.100 [91/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:35.100 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.100 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.100 [94/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:35.100 [95/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:35.100 [96/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:35.100 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.101 [98/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:35.101 [99/267] Linking static target lib/librte_meter.a 00:01:35.101 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.101 [101/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.101 [102/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:35.101 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.101 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.101 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:35.101 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:35.101 [107/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:35.101 [108/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:35.101 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:35.101 [110/267] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.101 [111/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.101 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.101 [113/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.101 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.101 [115/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.101 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.101 [117/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:35.101 [118/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.101 [119/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:35.101 [120/267] Linking static target lib/librte_ring.a 00:01:35.101 [121/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:35.101 [122/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:35.101 [123/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:35.101 [124/267] Linking static target lib/librte_cmdline.a 00:01:35.101 [125/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:35.101 [126/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:35.101 [127/267] Linking static target lib/librte_timer.a 00:01:35.101 [128/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:35.101 [129/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:35.101 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:35.101 [131/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:35.101 [132/267] Linking target lib/librte_log.so.24.1 00:01:35.101 [133/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:35.101 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:35.101 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:35.101 [136/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.101 [137/267] Linking static target lib/librte_mempool.a 00:01:35.101 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:35.101 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.101 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:35.101 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.101 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.101 [143/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:35.101 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:35.101 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:35.101 [146/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:35.101 [147/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:35.101 [148/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:35.101 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:35.361 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.361 [151/267] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:35.361 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:35.361 [153/267] Linking static target lib/librte_rcu.a 00:01:35.361 [154/267] Linking static target lib/librte_net.a 00:01:35.361 [155/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:35.361 [156/267] Linking static target lib/librte_reorder.a 00:01:35.361 [157/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.361 [158/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.361 [159/267] Linking static target lib/librte_dmadev.a 00:01:35.361 [160/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.361 [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:35.361 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.361 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:35.361 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:35.361 [165/267] Linking static target lib/librte_compressdev.a 00:01:35.361 [166/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:35.361 [167/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:35.361 [168/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:35.361 [169/267] Linking static target lib/librte_power.a 00:01:35.361 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:35.361 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:35.361 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:35.361 [173/267] Linking static target lib/librte_eal.a 00:01:35.361 [174/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:35.361 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:35.361 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:35.361 [177/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:35.361 [178/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:35.361 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.361 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:35.361 [181/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:35.361 [182/267] Linking static target lib/librte_mbuf.a 00:01:35.361 [183/267] Linking target lib/librte_kvargs.so.24.1 00:01:35.361 [184/267] Linking static target lib/librte_security.a 00:01:35.361 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:35.361 [186/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:35.361 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.361 [188/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:35.361 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.361 [190/267] Linking static target drivers/librte_bus_vdev.a 00:01:35.361 [191/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:35.361 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:35.361 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:35.619 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:35.619 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:35.619 [196/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:35.619 [197/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:35.619 [198/267] Linking static target drivers/librte_bus_pci.a 00:01:35.619 [199/267] Linking static target lib/librte_hash.a 00:01:35.619 [200/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:35.619 [201/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:35.619 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:35.619 [203/267] Linking static target drivers/librte_mempool_ring.a 00:01:35.619 [204/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.619 [205/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:35.619 [206/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.619 [207/267] Linking static target lib/librte_cryptodev.a 00:01:35.619 [208/267] Linking target lib/librte_telemetry.so.24.1 00:01:35.619 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.619 [210/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.619 [211/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:35.619 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.619 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:35.878 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.878 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.137 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.137 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:36.137 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.137 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.137 [220/267] Linking static target lib/librte_ethdev.a 00:01:36.137 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.137 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.396 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.396 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.396 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.396 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.338 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:37.338 [228/267] Linking static target lib/librte_vhost.a 00:01:37.911 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:39.824 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.412 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.983 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.983 [233/267] Linking target lib/librte_eal.so.24.1 00:01:46.983 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:46.983 [235/267] Linking target lib/librte_dmadev.so.24.1 00:01:47.243 [236/267] Linking target lib/librte_ring.so.24.1 00:01:47.243 [237/267] Linking target lib/librte_meter.so.24.1 00:01:47.243 [238/267] Linking target lib/librte_timer.so.24.1 00:01:47.243 [239/267] Linking target lib/librte_pci.so.24.1 00:01:47.243 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:47.243 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:47.243 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:47.243 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:47.244 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:47.244 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:47.244 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:47.244 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:47.244 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:47.504 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:47.504 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:47.504 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:47.504 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:47.504 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:47.765 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:01:47.765 [255/267] Linking target lib/librte_reorder.so.24.1 00:01:47.765 [256/267] Linking target lib/librte_net.so.24.1 00:01:47.765 [257/267] Linking target lib/librte_compressdev.so.24.1 00:01:47.765 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:47.765 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:47.765 [260/267] Linking target lib/librte_security.so.24.1 00:01:47.765 [261/267] Linking target lib/librte_hash.so.24.1 00:01:47.765 [262/267] Linking target lib/librte_cmdline.so.24.1 00:01:47.765 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:48.026 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:48.026 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:48.026 [266/267] Linking target lib/librte_power.so.24.1 00:01:48.026 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:48.026 INFO: autodetecting backend as ninja 00:01:48.026 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:49.411 CC lib/ut_mock/mock.o 00:01:49.411 CC lib/log/log.o 00:01:49.411 CC lib/log/log_flags.o 00:01:49.412 CC lib/log/log_deprecated.o 00:01:49.412 CC lib/ut/ut.o 00:01:49.412 LIB libspdk_ut_mock.a 00:01:49.412 LIB libspdk_log.a 00:01:49.412 LIB 
libspdk_ut.a 00:01:49.412 SO libspdk_ut_mock.so.6.0 00:01:49.412 SO libspdk_ut.so.2.0 00:01:49.412 SO libspdk_log.so.7.0 00:01:49.412 SYMLINK libspdk_ut_mock.so 00:01:49.412 SYMLINK libspdk_ut.so 00:01:49.412 SYMLINK libspdk_log.so 00:01:49.983 CXX lib/trace_parser/trace.o 00:01:49.983 CC lib/ioat/ioat.o 00:01:49.983 CC lib/util/base64.o 00:01:49.983 CC lib/util/bit_array.o 00:01:49.983 CC lib/util/cpuset.o 00:01:49.983 CC lib/util/crc16.o 00:01:49.983 CC lib/util/crc32.o 00:01:49.983 CC lib/util/crc32c.o 00:01:49.983 CC lib/dma/dma.o 00:01:49.983 CC lib/util/crc32_ieee.o 00:01:49.983 CC lib/util/fd.o 00:01:49.983 CC lib/util/crc64.o 00:01:49.983 CC lib/util/dif.o 00:01:49.983 CC lib/util/fd_group.o 00:01:49.983 CC lib/util/file.o 00:01:49.983 CC lib/util/hexlify.o 00:01:49.983 CC lib/util/iov.o 00:01:49.983 CC lib/util/math.o 00:01:49.983 CC lib/util/pipe.o 00:01:49.983 CC lib/util/net.o 00:01:49.983 CC lib/util/strerror_tls.o 00:01:49.983 CC lib/util/string.o 00:01:49.983 CC lib/util/uuid.o 00:01:49.983 CC lib/util/xor.o 00:01:49.983 CC lib/util/zipf.o 00:01:49.983 CC lib/vfio_user/host/vfio_user_pci.o 00:01:49.983 CC lib/vfio_user/host/vfio_user.o 00:01:49.983 LIB libspdk_dma.a 00:01:49.983 SO libspdk_dma.so.4.0 00:01:50.244 LIB libspdk_ioat.a 00:01:50.244 SYMLINK libspdk_dma.so 00:01:50.244 SO libspdk_ioat.so.7.0 00:01:50.244 SYMLINK libspdk_ioat.so 00:01:50.244 LIB libspdk_vfio_user.a 00:01:50.244 SO libspdk_vfio_user.so.5.0 00:01:50.244 LIB libspdk_util.a 00:01:50.244 SYMLINK libspdk_vfio_user.so 00:01:50.506 SO libspdk_util.so.9.1 00:01:50.506 SYMLINK libspdk_util.so 00:01:50.506 LIB libspdk_trace_parser.a 00:01:50.766 SO libspdk_trace_parser.so.5.0 00:01:50.766 SYMLINK libspdk_trace_parser.so 00:01:51.026 CC lib/json/json_parse.o 00:01:51.026 CC lib/json/json_write.o 00:01:51.026 CC lib/json/json_util.o 00:01:51.026 CC lib/vmd/vmd.o 00:01:51.026 CC lib/vmd/led.o 00:01:51.026 CC lib/conf/conf.o 00:01:51.026 CC lib/rdma_provider/common.o 00:01:51.026 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:51.026 CC lib/rdma_utils/rdma_utils.o 00:01:51.026 CC lib/idxd/idxd.o 00:01:51.026 CC lib/idxd/idxd_user.o 00:01:51.026 CC lib/env_dpdk/env.o 00:01:51.026 CC lib/env_dpdk/pci.o 00:01:51.026 CC lib/idxd/idxd_kernel.o 00:01:51.026 CC lib/env_dpdk/memory.o 00:01:51.026 CC lib/env_dpdk/init.o 00:01:51.027 CC lib/env_dpdk/threads.o 00:01:51.027 CC lib/env_dpdk/pci_ioat.o 00:01:51.027 CC lib/env_dpdk/pci_virtio.o 00:01:51.027 CC lib/env_dpdk/pci_vmd.o 00:01:51.027 CC lib/env_dpdk/pci_idxd.o 00:01:51.027 CC lib/env_dpdk/pci_event.o 00:01:51.027 CC lib/env_dpdk/sigbus_handler.o 00:01:51.027 CC lib/env_dpdk/pci_dpdk.o 00:01:51.027 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:51.027 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:51.286 LIB libspdk_rdma_provider.a 00:01:51.286 SO libspdk_rdma_provider.so.6.0 00:01:51.286 LIB libspdk_conf.a 00:01:51.286 LIB libspdk_rdma_utils.a 00:01:51.286 LIB libspdk_json.a 00:01:51.286 SO libspdk_conf.so.6.0 00:01:51.286 SYMLINK libspdk_rdma_provider.so 00:01:51.286 SO libspdk_rdma_utils.so.1.0 00:01:51.286 SO libspdk_json.so.6.0 00:01:51.286 SYMLINK libspdk_conf.so 00:01:51.286 SYMLINK libspdk_rdma_utils.so 00:01:51.286 SYMLINK libspdk_json.so 00:01:51.547 LIB libspdk_idxd.a 00:01:51.547 SO libspdk_idxd.so.12.0 00:01:51.547 LIB libspdk_vmd.a 00:01:51.547 SO libspdk_vmd.so.6.0 00:01:51.547 SYMLINK libspdk_idxd.so 00:01:51.547 SYMLINK libspdk_vmd.so 00:01:51.807 CC lib/jsonrpc/jsonrpc_server.o 00:01:51.807 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:51.807 CC 
lib/jsonrpc/jsonrpc_client.o 00:01:51.807 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:52.067 LIB libspdk_jsonrpc.a 00:01:52.067 SO libspdk_jsonrpc.so.6.0 00:01:52.067 SYMLINK libspdk_jsonrpc.so 00:01:52.067 LIB libspdk_env_dpdk.a 00:01:52.327 SO libspdk_env_dpdk.so.15.0 00:01:52.327 SYMLINK libspdk_env_dpdk.so 00:01:52.588 CC lib/rpc/rpc.o 00:01:52.588 LIB libspdk_rpc.a 00:01:52.848 SO libspdk_rpc.so.6.0 00:01:52.848 SYMLINK libspdk_rpc.so 00:01:53.109 CC lib/trace/trace.o 00:01:53.109 CC lib/trace/trace_flags.o 00:01:53.109 CC lib/trace/trace_rpc.o 00:01:53.109 CC lib/keyring/keyring.o 00:01:53.109 CC lib/keyring/keyring_rpc.o 00:01:53.109 CC lib/notify/notify.o 00:01:53.109 CC lib/notify/notify_rpc.o 00:01:53.369 LIB libspdk_notify.a 00:01:53.369 LIB libspdk_keyring.a 00:01:53.369 LIB libspdk_trace.a 00:01:53.369 SO libspdk_notify.so.6.0 00:01:53.369 SO libspdk_keyring.so.1.0 00:01:53.369 SO libspdk_trace.so.10.0 00:01:53.369 SYMLINK libspdk_notify.so 00:01:53.369 SYMLINK libspdk_keyring.so 00:01:53.630 SYMLINK libspdk_trace.so 00:01:53.889 CC lib/thread/thread.o 00:01:53.889 CC lib/thread/iobuf.o 00:01:53.889 CC lib/sock/sock.o 00:01:53.890 CC lib/sock/sock_rpc.o 00:01:54.150 LIB libspdk_sock.a 00:01:54.150 SO libspdk_sock.so.10.0 00:01:54.411 SYMLINK libspdk_sock.so 00:01:54.670 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:54.670 CC lib/nvme/nvme_ctrlr.o 00:01:54.670 CC lib/nvme/nvme_ns_cmd.o 00:01:54.670 CC lib/nvme/nvme_fabric.o 00:01:54.670 CC lib/nvme/nvme_ns.o 00:01:54.670 CC lib/nvme/nvme_pcie_common.o 00:01:54.670 CC lib/nvme/nvme_pcie.o 00:01:54.670 CC lib/nvme/nvme_qpair.o 00:01:54.670 CC lib/nvme/nvme.o 00:01:54.670 CC lib/nvme/nvme_quirks.o 00:01:54.670 CC lib/nvme/nvme_transport.o 00:01:54.670 CC lib/nvme/nvme_discovery.o 00:01:54.670 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:54.670 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:54.670 CC lib/nvme/nvme_tcp.o 00:01:54.670 CC lib/nvme/nvme_io_msg.o 00:01:54.670 CC lib/nvme/nvme_opal.o 00:01:54.670 CC lib/nvme/nvme_poll_group.o 00:01:54.670 CC lib/nvme/nvme_zns.o 00:01:54.670 CC lib/nvme/nvme_stubs.o 00:01:54.670 CC lib/nvme/nvme_auth.o 00:01:54.670 CC lib/nvme/nvme_cuse.o 00:01:54.670 CC lib/nvme/nvme_vfio_user.o 00:01:54.670 CC lib/nvme/nvme_rdma.o 00:01:54.938 LIB libspdk_thread.a 00:01:55.198 SO libspdk_thread.so.10.1 00:01:55.198 SYMLINK libspdk_thread.so 00:01:55.457 CC lib/init/json_config.o 00:01:55.457 CC lib/init/subsystem.o 00:01:55.457 CC lib/init/rpc.o 00:01:55.457 CC lib/init/subsystem_rpc.o 00:01:55.457 CC lib/blob/blobstore.o 00:01:55.457 CC lib/accel/accel.o 00:01:55.457 CC lib/accel/accel_rpc.o 00:01:55.457 CC lib/blob/request.o 00:01:55.457 CC lib/accel/accel_sw.o 00:01:55.457 CC lib/blob/zeroes.o 00:01:55.457 CC lib/blob/blob_bs_dev.o 00:01:55.457 CC lib/virtio/virtio.o 00:01:55.457 CC lib/virtio/virtio_vhost_user.o 00:01:55.457 CC lib/virtio/virtio_vfio_user.o 00:01:55.457 CC lib/virtio/virtio_pci.o 00:01:55.457 CC lib/vfu_tgt/tgt_endpoint.o 00:01:55.457 CC lib/vfu_tgt/tgt_rpc.o 00:01:55.717 LIB libspdk_init.a 00:01:55.717 SO libspdk_init.so.5.0 00:01:55.717 LIB libspdk_vfu_tgt.a 00:01:55.717 LIB libspdk_virtio.a 00:01:55.717 SYMLINK libspdk_init.so 00:01:55.717 SO libspdk_virtio.so.7.0 00:01:55.977 SO libspdk_vfu_tgt.so.3.0 00:01:55.977 SYMLINK libspdk_virtio.so 00:01:55.977 SYMLINK libspdk_vfu_tgt.so 00:01:56.235 CC lib/event/app.o 00:01:56.235 CC lib/event/reactor.o 00:01:56.235 CC lib/event/log_rpc.o 00:01:56.235 CC lib/event/scheduler_static.o 00:01:56.235 CC lib/event/app_rpc.o 00:01:56.235 LIB libspdk_accel.a 
00:01:56.494 SO libspdk_accel.so.15.1 00:01:56.495 LIB libspdk_nvme.a 00:01:56.495 SYMLINK libspdk_accel.so 00:01:56.495 LIB libspdk_event.a 00:01:56.495 SO libspdk_nvme.so.13.1 00:01:56.495 SO libspdk_event.so.14.0 00:01:56.755 SYMLINK libspdk_event.so 00:01:56.755 CC lib/bdev/bdev.o 00:01:56.755 CC lib/bdev/bdev_rpc.o 00:01:56.755 CC lib/bdev/bdev_zone.o 00:01:56.755 CC lib/bdev/part.o 00:01:56.755 CC lib/bdev/scsi_nvme.o 00:01:56.755 SYMLINK libspdk_nvme.so 00:01:58.137 LIB libspdk_blob.a 00:01:58.137 SO libspdk_blob.so.11.0 00:01:58.137 SYMLINK libspdk_blob.so 00:01:58.708 CC lib/blobfs/blobfs.o 00:01:58.708 CC lib/lvol/lvol.o 00:01:58.708 CC lib/blobfs/tree.o 00:01:58.969 LIB libspdk_bdev.a 00:01:58.969 SO libspdk_bdev.so.15.1 00:01:59.230 SYMLINK libspdk_bdev.so 00:01:59.230 LIB libspdk_blobfs.a 00:01:59.230 SO libspdk_blobfs.so.10.0 00:01:59.230 LIB libspdk_lvol.a 00:01:59.491 SO libspdk_lvol.so.10.0 00:01:59.491 SYMLINK libspdk_blobfs.so 00:01:59.491 SYMLINK libspdk_lvol.so 00:01:59.491 CC lib/scsi/dev.o 00:01:59.491 CC lib/scsi/lun.o 00:01:59.491 CC lib/scsi/port.o 00:01:59.491 CC lib/nbd/nbd.o 00:01:59.491 CC lib/scsi/scsi.o 00:01:59.491 CC lib/nbd/nbd_rpc.o 00:01:59.491 CC lib/scsi/scsi_bdev.o 00:01:59.491 CC lib/scsi/scsi_pr.o 00:01:59.491 CC lib/ftl/ftl_core.o 00:01:59.491 CC lib/ftl/ftl_init.o 00:01:59.491 CC lib/scsi/scsi_rpc.o 00:01:59.491 CC lib/ftl/ftl_layout.o 00:01:59.491 CC lib/scsi/task.o 00:01:59.491 CC lib/ftl/ftl_debug.o 00:01:59.491 CC lib/ftl/ftl_io.o 00:01:59.491 CC lib/ftl/ftl_sb.o 00:01:59.491 CC lib/ftl/ftl_l2p.o 00:01:59.491 CC lib/ftl/ftl_nv_cache.o 00:01:59.491 CC lib/ftl/ftl_l2p_flat.o 00:01:59.491 CC lib/ftl/ftl_band.o 00:01:59.491 CC lib/ftl/ftl_band_ops.o 00:01:59.491 CC lib/ftl/ftl_writer.o 00:01:59.491 CC lib/ftl/ftl_rq.o 00:01:59.491 CC lib/ftl/ftl_reloc.o 00:01:59.491 CC lib/ftl/ftl_l2p_cache.o 00:01:59.491 CC lib/ftl/ftl_p2l.o 00:01:59.491 CC lib/ublk/ublk.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt.o 00:01:59.491 CC lib/ublk/ublk_rpc.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:59.491 CC lib/nvmf/ctrlr.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:59.491 CC lib/nvmf/ctrlr_discovery.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:59.491 CC lib/nvmf/ctrlr_bdev.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:59.491 CC lib/nvmf/subsystem.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:59.491 CC lib/nvmf/nvmf.o 00:01:59.491 CC lib/nvmf/nvmf_rpc.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:59.491 CC lib/nvmf/transport.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:59.491 CC lib/nvmf/tcp.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:59.491 CC lib/nvmf/stubs.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:59.491 CC lib/nvmf/mdns_server.o 00:01:59.491 CC lib/nvmf/vfio_user.o 00:01:59.491 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:59.491 CC lib/ftl/utils/ftl_md.o 00:01:59.491 CC lib/nvmf/rdma.o 00:01:59.491 CC lib/nvmf/auth.o 00:01:59.491 CC lib/ftl/utils/ftl_conf.o 00:01:59.491 CC lib/ftl/utils/ftl_mempool.o 00:01:59.491 CC lib/ftl/utils/ftl_property.o 00:01:59.491 CC lib/ftl/utils/ftl_bitmap.o 00:01:59.491 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:59.491 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:59.491 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:59.491 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:59.491 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:59.491 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 
00:01:59.491 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:59.491 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:59.491 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:59.491 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:59.491 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:59.491 CC lib/ftl/base/ftl_base_dev.o 00:01:59.491 CC lib/ftl/base/ftl_base_bdev.o 00:01:59.491 CC lib/ftl/ftl_trace.o 00:02:00.059 LIB libspdk_nbd.a 00:02:00.059 SO libspdk_nbd.so.7.0 00:02:00.059 SYMLINK libspdk_nbd.so 00:02:00.059 LIB libspdk_scsi.a 00:02:00.059 SO libspdk_scsi.so.9.0 00:02:00.320 SYMLINK libspdk_scsi.so 00:02:00.320 LIB libspdk_ublk.a 00:02:00.320 SO libspdk_ublk.so.3.0 00:02:00.320 SYMLINK libspdk_ublk.so 00:02:00.580 LIB libspdk_ftl.a 00:02:00.580 CC lib/vhost/vhost.o 00:02:00.580 CC lib/iscsi/conn.o 00:02:00.580 CC lib/vhost/vhost_rpc.o 00:02:00.580 CC lib/iscsi/init_grp.o 00:02:00.580 CC lib/vhost/vhost_scsi.o 00:02:00.580 CC lib/iscsi/iscsi.o 00:02:00.580 CC lib/vhost/vhost_blk.o 00:02:00.580 CC lib/iscsi/md5.o 00:02:00.580 CC lib/vhost/rte_vhost_user.o 00:02:00.580 CC lib/iscsi/param.o 00:02:00.580 CC lib/iscsi/portal_grp.o 00:02:00.580 CC lib/iscsi/tgt_node.o 00:02:00.580 CC lib/iscsi/iscsi_subsystem.o 00:02:00.580 CC lib/iscsi/iscsi_rpc.o 00:02:00.580 CC lib/iscsi/task.o 00:02:00.580 SO libspdk_ftl.so.9.0 00:02:01.152 SYMLINK libspdk_ftl.so 00:02:01.414 LIB libspdk_nvmf.a 00:02:01.414 SO libspdk_nvmf.so.19.0 00:02:01.414 LIB libspdk_vhost.a 00:02:01.674 SO libspdk_vhost.so.8.0 00:02:01.674 SYMLINK libspdk_nvmf.so 00:02:01.674 SYMLINK libspdk_vhost.so 00:02:01.674 LIB libspdk_iscsi.a 00:02:01.935 SO libspdk_iscsi.so.8.0 00:02:01.935 SYMLINK libspdk_iscsi.so 00:02:02.507 CC module/env_dpdk/env_dpdk_rpc.o 00:02:02.507 CC module/vfu_device/vfu_virtio.o 00:02:02.507 CC module/vfu_device/vfu_virtio_blk.o 00:02:02.507 CC module/vfu_device/vfu_virtio_scsi.o 00:02:02.507 CC module/vfu_device/vfu_virtio_rpc.o 00:02:02.768 CC module/accel/error/accel_error.o 00:02:02.768 CC module/accel/error/accel_error_rpc.o 00:02:02.768 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:02.768 CC module/accel/dsa/accel_dsa.o 00:02:02.768 CC module/accel/dsa/accel_dsa_rpc.o 00:02:02.768 LIB libspdk_env_dpdk_rpc.a 00:02:02.768 CC module/accel/iaa/accel_iaa.o 00:02:02.768 CC module/accel/iaa/accel_iaa_rpc.o 00:02:02.768 CC module/keyring/file/keyring.o 00:02:02.768 CC module/keyring/file/keyring_rpc.o 00:02:02.768 CC module/accel/ioat/accel_ioat.o 00:02:02.768 CC module/accel/ioat/accel_ioat_rpc.o 00:02:02.768 CC module/sock/posix/posix.o 00:02:02.768 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:02.768 CC module/blob/bdev/blob_bdev.o 00:02:02.768 CC module/keyring/linux/keyring.o 00:02:02.768 CC module/keyring/linux/keyring_rpc.o 00:02:02.768 CC module/scheduler/gscheduler/gscheduler.o 00:02:02.768 SO libspdk_env_dpdk_rpc.so.6.0 00:02:02.768 SYMLINK libspdk_env_dpdk_rpc.so 00:02:03.028 LIB libspdk_accel_error.a 00:02:03.028 LIB libspdk_keyring_file.a 00:02:03.028 LIB libspdk_scheduler_gscheduler.a 00:02:03.028 LIB libspdk_keyring_linux.a 00:02:03.028 LIB libspdk_scheduler_dpdk_governor.a 00:02:03.028 LIB libspdk_scheduler_dynamic.a 00:02:03.028 SO libspdk_keyring_file.so.1.0 00:02:03.028 SO libspdk_scheduler_gscheduler.so.4.0 00:02:03.028 SO libspdk_accel_error.so.2.0 00:02:03.028 LIB libspdk_accel_ioat.a 00:02:03.028 LIB libspdk_accel_iaa.a 00:02:03.028 SO libspdk_keyring_linux.so.1.0 00:02:03.028 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:03.028 SO libspdk_scheduler_dynamic.so.4.0 00:02:03.028 LIB libspdk_accel_dsa.a 00:02:03.028 SO 
libspdk_accel_ioat.so.6.0 00:02:03.028 SYMLINK libspdk_scheduler_gscheduler.so 00:02:03.028 SYMLINK libspdk_keyring_file.so 00:02:03.028 SO libspdk_accel_iaa.so.3.0 00:02:03.028 SO libspdk_accel_dsa.so.5.0 00:02:03.028 SYMLINK libspdk_accel_error.so 00:02:03.028 LIB libspdk_blob_bdev.a 00:02:03.028 SYMLINK libspdk_keyring_linux.so 00:02:03.028 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:03.028 SYMLINK libspdk_scheduler_dynamic.so 00:02:03.028 SO libspdk_blob_bdev.so.11.0 00:02:03.028 SYMLINK libspdk_accel_ioat.so 00:02:03.028 SYMLINK libspdk_accel_iaa.so 00:02:03.028 SYMLINK libspdk_accel_dsa.so 00:02:03.028 LIB libspdk_vfu_device.a 00:02:03.028 SYMLINK libspdk_blob_bdev.so 00:02:03.290 SO libspdk_vfu_device.so.3.0 00:02:03.290 SYMLINK libspdk_vfu_device.so 00:02:03.290 LIB libspdk_sock_posix.a 00:02:03.551 SO libspdk_sock_posix.so.6.0 00:02:03.551 SYMLINK libspdk_sock_posix.so 00:02:03.551 CC module/bdev/delay/vbdev_delay.o 00:02:03.551 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:03.810 CC module/bdev/gpt/gpt.o 00:02:03.810 CC module/bdev/gpt/vbdev_gpt.o 00:02:03.810 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:03.810 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:03.810 CC module/bdev/malloc/bdev_malloc.o 00:02:03.810 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:03.810 CC module/bdev/lvol/vbdev_lvol.o 00:02:03.810 CC module/bdev/error/vbdev_error.o 00:02:03.810 CC module/bdev/split/vbdev_split.o 00:02:03.810 CC module/bdev/split/vbdev_split_rpc.o 00:02:03.810 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:03.810 CC module/bdev/error/vbdev_error_rpc.o 00:02:03.810 CC module/blobfs/bdev/blobfs_bdev.o 00:02:03.810 CC module/bdev/passthru/vbdev_passthru.o 00:02:03.810 CC module/bdev/null/bdev_null.o 00:02:03.810 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:03.810 CC module/bdev/nvme/bdev_nvme.o 00:02:03.810 CC module/bdev/null/bdev_null_rpc.o 00:02:03.810 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:03.810 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:03.810 CC module/bdev/aio/bdev_aio.o 00:02:03.810 CC module/bdev/aio/bdev_aio_rpc.o 00:02:03.810 CC module/bdev/nvme/nvme_rpc.o 00:02:03.810 CC module/bdev/iscsi/bdev_iscsi.o 00:02:03.810 CC module/bdev/nvme/bdev_mdns_client.o 00:02:03.810 CC module/bdev/nvme/vbdev_opal.o 00:02:03.810 CC module/bdev/ftl/bdev_ftl.o 00:02:03.810 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:03.810 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:03.810 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:03.810 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:03.810 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:03.810 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:03.810 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:03.810 CC module/bdev/raid/bdev_raid.o 00:02:03.810 CC module/bdev/raid/bdev_raid_rpc.o 00:02:03.810 CC module/bdev/raid/bdev_raid_sb.o 00:02:03.810 CC module/bdev/raid/raid0.o 00:02:03.810 CC module/bdev/raid/raid1.o 00:02:03.810 CC module/bdev/raid/concat.o 00:02:04.070 LIB libspdk_blobfs_bdev.a 00:02:04.070 LIB libspdk_bdev_split.a 00:02:04.070 LIB libspdk_bdev_gpt.a 00:02:04.070 SO libspdk_blobfs_bdev.so.6.0 00:02:04.070 SO libspdk_bdev_split.so.6.0 00:02:04.070 LIB libspdk_bdev_null.a 00:02:04.070 SO libspdk_bdev_gpt.so.6.0 00:02:04.070 LIB libspdk_bdev_delay.a 00:02:04.070 LIB libspdk_bdev_error.a 00:02:04.070 LIB libspdk_bdev_zone_block.a 00:02:04.070 LIB libspdk_bdev_passthru.a 00:02:04.070 SO libspdk_bdev_null.so.6.0 00:02:04.070 SYMLINK libspdk_blobfs_bdev.so 00:02:04.070 SO libspdk_bdev_delay.so.6.0 00:02:04.070 SYMLINK libspdk_bdev_split.so 
00:02:04.071 SO libspdk_bdev_error.so.6.0 00:02:04.071 LIB libspdk_bdev_ftl.a 00:02:04.071 SO libspdk_bdev_zone_block.so.6.0 00:02:04.071 LIB libspdk_bdev_aio.a 00:02:04.071 SO libspdk_bdev_passthru.so.6.0 00:02:04.071 LIB libspdk_bdev_malloc.a 00:02:04.071 SYMLINK libspdk_bdev_gpt.so 00:02:04.071 SO libspdk_bdev_ftl.so.6.0 00:02:04.071 SO libspdk_bdev_aio.so.6.0 00:02:04.071 SYMLINK libspdk_bdev_null.so 00:02:04.071 LIB libspdk_bdev_iscsi.a 00:02:04.071 SYMLINK libspdk_bdev_delay.so 00:02:04.071 SO libspdk_bdev_malloc.so.6.0 00:02:04.071 SYMLINK libspdk_bdev_error.so 00:02:04.071 SYMLINK libspdk_bdev_zone_block.so 00:02:04.071 SO libspdk_bdev_iscsi.so.6.0 00:02:04.071 SYMLINK libspdk_bdev_passthru.so 00:02:04.071 SYMLINK libspdk_bdev_ftl.so 00:02:04.071 SYMLINK libspdk_bdev_aio.so 00:02:04.071 SYMLINK libspdk_bdev_iscsi.so 00:02:04.071 SYMLINK libspdk_bdev_malloc.so 00:02:04.071 LIB libspdk_bdev_lvol.a 00:02:04.331 LIB libspdk_bdev_virtio.a 00:02:04.331 SO libspdk_bdev_lvol.so.6.0 00:02:04.331 SO libspdk_bdev_virtio.so.6.0 00:02:04.331 SYMLINK libspdk_bdev_lvol.so 00:02:04.331 SYMLINK libspdk_bdev_virtio.so 00:02:04.592 LIB libspdk_bdev_raid.a 00:02:04.592 SO libspdk_bdev_raid.so.6.0 00:02:04.592 SYMLINK libspdk_bdev_raid.so 00:02:05.535 LIB libspdk_bdev_nvme.a 00:02:05.535 SO libspdk_bdev_nvme.so.7.0 00:02:05.796 SYMLINK libspdk_bdev_nvme.so 00:02:06.367 CC module/event/subsystems/sock/sock.o 00:02:06.367 CC module/event/subsystems/vmd/vmd.o 00:02:06.367 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:06.367 CC module/event/subsystems/iobuf/iobuf.o 00:02:06.367 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:06.367 CC module/event/subsystems/scheduler/scheduler.o 00:02:06.367 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:06.367 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:06.367 CC module/event/subsystems/keyring/keyring.o 00:02:06.629 LIB libspdk_event_sock.a 00:02:06.629 LIB libspdk_event_vmd.a 00:02:06.629 LIB libspdk_event_vhost_blk.a 00:02:06.629 LIB libspdk_event_keyring.a 00:02:06.629 LIB libspdk_event_scheduler.a 00:02:06.629 LIB libspdk_event_vfu_tgt.a 00:02:06.629 LIB libspdk_event_iobuf.a 00:02:06.629 SO libspdk_event_keyring.so.1.0 00:02:06.629 SO libspdk_event_sock.so.5.0 00:02:06.629 SO libspdk_event_scheduler.so.4.0 00:02:06.629 SO libspdk_event_vmd.so.6.0 00:02:06.629 SO libspdk_event_vhost_blk.so.3.0 00:02:06.629 SO libspdk_event_iobuf.so.3.0 00:02:06.629 SO libspdk_event_vfu_tgt.so.3.0 00:02:06.629 SYMLINK libspdk_event_keyring.so 00:02:06.629 SYMLINK libspdk_event_scheduler.so 00:02:06.629 SYMLINK libspdk_event_sock.so 00:02:06.629 SYMLINK libspdk_event_vhost_blk.so 00:02:06.629 SYMLINK libspdk_event_vmd.so 00:02:06.629 SYMLINK libspdk_event_vfu_tgt.so 00:02:06.890 SYMLINK libspdk_event_iobuf.so 00:02:07.150 CC module/event/subsystems/accel/accel.o 00:02:07.150 LIB libspdk_event_accel.a 00:02:07.438 SO libspdk_event_accel.so.6.0 00:02:07.438 SYMLINK libspdk_event_accel.so 00:02:07.767 CC module/event/subsystems/bdev/bdev.o 00:02:07.767 LIB libspdk_event_bdev.a 00:02:08.040 SO libspdk_event_bdev.so.6.0 00:02:08.040 SYMLINK libspdk_event_bdev.so 00:02:08.301 CC module/event/subsystems/scsi/scsi.o 00:02:08.301 CC module/event/subsystems/nbd/nbd.o 00:02:08.301 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:08.301 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:08.301 CC module/event/subsystems/ublk/ublk.o 00:02:08.562 LIB libspdk_event_nbd.a 00:02:08.563 LIB libspdk_event_ublk.a 00:02:08.563 LIB libspdk_event_scsi.a 00:02:08.563 SO 
libspdk_event_ublk.so.3.0 00:02:08.563 SO libspdk_event_nbd.so.6.0 00:02:08.563 LIB libspdk_event_nvmf.a 00:02:08.563 SO libspdk_event_scsi.so.6.0 00:02:08.563 SO libspdk_event_nvmf.so.6.0 00:02:08.563 SYMLINK libspdk_event_nbd.so 00:02:08.563 SYMLINK libspdk_event_ublk.so 00:02:08.563 SYMLINK libspdk_event_scsi.so 00:02:08.824 SYMLINK libspdk_event_nvmf.so 00:02:09.085 CC module/event/subsystems/iscsi/iscsi.o 00:02:09.085 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:09.085 LIB libspdk_event_vhost_scsi.a 00:02:09.085 LIB libspdk_event_iscsi.a 00:02:09.085 SO libspdk_event_vhost_scsi.so.3.0 00:02:09.085 SO libspdk_event_iscsi.so.6.0 00:02:09.345 SYMLINK libspdk_event_vhost_scsi.so 00:02:09.345 SYMLINK libspdk_event_iscsi.so 00:02:09.345 SO libspdk.so.6.0 00:02:09.345 SYMLINK libspdk.so 00:02:09.916 CXX app/trace/trace.o 00:02:09.916 CC app/trace_record/trace_record.o 00:02:09.916 CC app/spdk_nvme_identify/identify.o 00:02:09.916 CC app/spdk_top/spdk_top.o 00:02:09.916 CC app/spdk_nvme_perf/perf.o 00:02:09.916 CC app/spdk_lspci/spdk_lspci.o 00:02:09.916 CC app/spdk_nvme_discover/discovery_aer.o 00:02:09.916 TEST_HEADER include/spdk/accel.h 00:02:09.916 CC test/rpc_client/rpc_client_test.o 00:02:09.916 TEST_HEADER include/spdk/accel_module.h 00:02:09.916 TEST_HEADER include/spdk/assert.h 00:02:09.916 TEST_HEADER include/spdk/barrier.h 00:02:09.916 TEST_HEADER include/spdk/base64.h 00:02:09.916 TEST_HEADER include/spdk/bdev.h 00:02:09.916 TEST_HEADER include/spdk/bdev_module.h 00:02:09.916 TEST_HEADER include/spdk/bdev_zone.h 00:02:09.916 TEST_HEADER include/spdk/bit_array.h 00:02:09.916 TEST_HEADER include/spdk/bit_pool.h 00:02:09.916 TEST_HEADER include/spdk/blob_bdev.h 00:02:09.916 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:09.916 TEST_HEADER include/spdk/blobfs.h 00:02:09.916 TEST_HEADER include/spdk/conf.h 00:02:09.916 TEST_HEADER include/spdk/blob.h 00:02:09.916 TEST_HEADER include/spdk/config.h 00:02:09.916 TEST_HEADER include/spdk/cpuset.h 00:02:09.916 TEST_HEADER include/spdk/crc16.h 00:02:09.916 TEST_HEADER include/spdk/crc32.h 00:02:09.916 TEST_HEADER include/spdk/crc64.h 00:02:09.916 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:09.916 TEST_HEADER include/spdk/dif.h 00:02:09.916 TEST_HEADER include/spdk/dma.h 00:02:09.916 TEST_HEADER include/spdk/env_dpdk.h 00:02:09.916 TEST_HEADER include/spdk/endian.h 00:02:09.916 TEST_HEADER include/spdk/env.h 00:02:09.916 TEST_HEADER include/spdk/event.h 00:02:09.916 TEST_HEADER include/spdk/fd_group.h 00:02:09.916 TEST_HEADER include/spdk/fd.h 00:02:09.916 TEST_HEADER include/spdk/file.h 00:02:09.916 CC app/spdk_dd/spdk_dd.o 00:02:09.916 TEST_HEADER include/spdk/gpt_spec.h 00:02:09.916 TEST_HEADER include/spdk/ftl.h 00:02:09.916 CC app/nvmf_tgt/nvmf_main.o 00:02:09.916 TEST_HEADER include/spdk/hexlify.h 00:02:09.916 TEST_HEADER include/spdk/histogram_data.h 00:02:09.916 TEST_HEADER include/spdk/idxd.h 00:02:09.916 CC app/iscsi_tgt/iscsi_tgt.o 00:02:09.916 TEST_HEADER include/spdk/idxd_spec.h 00:02:09.916 TEST_HEADER include/spdk/init.h 00:02:09.916 TEST_HEADER include/spdk/ioat_spec.h 00:02:09.916 TEST_HEADER include/spdk/ioat.h 00:02:09.916 TEST_HEADER include/spdk/iscsi_spec.h 00:02:09.916 TEST_HEADER include/spdk/json.h 00:02:09.916 TEST_HEADER include/spdk/jsonrpc.h 00:02:09.916 TEST_HEADER include/spdk/keyring.h 00:02:09.916 TEST_HEADER include/spdk/keyring_module.h 00:02:09.916 CC app/spdk_tgt/spdk_tgt.o 00:02:09.916 TEST_HEADER include/spdk/log.h 00:02:09.916 TEST_HEADER include/spdk/likely.h 00:02:09.916 TEST_HEADER 
include/spdk/memory.h 00:02:09.916 TEST_HEADER include/spdk/lvol.h 00:02:09.916 TEST_HEADER include/spdk/mmio.h 00:02:09.916 TEST_HEADER include/spdk/nbd.h 00:02:09.916 TEST_HEADER include/spdk/net.h 00:02:09.916 TEST_HEADER include/spdk/nvme.h 00:02:09.916 TEST_HEADER include/spdk/nvme_intel.h 00:02:09.916 TEST_HEADER include/spdk/notify.h 00:02:09.916 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:09.916 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:09.916 TEST_HEADER include/spdk/nvme_spec.h 00:02:09.916 TEST_HEADER include/spdk/nvme_zns.h 00:02:09.916 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:09.916 TEST_HEADER include/spdk/nvmf.h 00:02:09.916 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:09.916 TEST_HEADER include/spdk/nvmf_spec.h 00:02:09.916 TEST_HEADER include/spdk/nvmf_transport.h 00:02:09.916 TEST_HEADER include/spdk/pci_ids.h 00:02:09.916 TEST_HEADER include/spdk/opal.h 00:02:09.916 TEST_HEADER include/spdk/opal_spec.h 00:02:09.916 TEST_HEADER include/spdk/pipe.h 00:02:09.916 TEST_HEADER include/spdk/queue.h 00:02:09.916 TEST_HEADER include/spdk/rpc.h 00:02:09.916 TEST_HEADER include/spdk/reduce.h 00:02:09.916 TEST_HEADER include/spdk/scheduler.h 00:02:09.916 TEST_HEADER include/spdk/scsi.h 00:02:09.916 TEST_HEADER include/spdk/scsi_spec.h 00:02:09.916 TEST_HEADER include/spdk/stdinc.h 00:02:09.916 TEST_HEADER include/spdk/sock.h 00:02:09.916 TEST_HEADER include/spdk/thread.h 00:02:09.916 TEST_HEADER include/spdk/string.h 00:02:09.916 TEST_HEADER include/spdk/trace.h 00:02:09.916 TEST_HEADER include/spdk/trace_parser.h 00:02:09.916 TEST_HEADER include/spdk/tree.h 00:02:09.916 TEST_HEADER include/spdk/ublk.h 00:02:09.916 TEST_HEADER include/spdk/util.h 00:02:09.916 TEST_HEADER include/spdk/uuid.h 00:02:09.916 TEST_HEADER include/spdk/version.h 00:02:09.916 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:09.916 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:09.916 TEST_HEADER include/spdk/vhost.h 00:02:09.916 TEST_HEADER include/spdk/vmd.h 00:02:09.916 TEST_HEADER include/spdk/zipf.h 00:02:09.916 TEST_HEADER include/spdk/xor.h 00:02:09.916 CXX test/cpp_headers/accel_module.o 00:02:09.916 CXX test/cpp_headers/accel.o 00:02:09.916 CXX test/cpp_headers/assert.o 00:02:09.916 CXX test/cpp_headers/barrier.o 00:02:09.916 CXX test/cpp_headers/base64.o 00:02:09.916 CXX test/cpp_headers/bdev.o 00:02:09.916 CXX test/cpp_headers/bdev_module.o 00:02:09.916 CXX test/cpp_headers/bit_array.o 00:02:09.916 CXX test/cpp_headers/bdev_zone.o 00:02:09.916 CXX test/cpp_headers/bit_pool.o 00:02:09.916 CXX test/cpp_headers/blob_bdev.o 00:02:09.916 CXX test/cpp_headers/blobfs.o 00:02:09.916 CXX test/cpp_headers/blobfs_bdev.o 00:02:09.916 CXX test/cpp_headers/blob.o 00:02:09.916 CXX test/cpp_headers/config.o 00:02:09.916 CXX test/cpp_headers/conf.o 00:02:09.916 CXX test/cpp_headers/cpuset.o 00:02:09.916 CXX test/cpp_headers/crc64.o 00:02:09.916 CXX test/cpp_headers/crc16.o 00:02:09.916 CXX test/cpp_headers/crc32.o 00:02:09.916 CXX test/cpp_headers/endian.o 00:02:09.916 CXX test/cpp_headers/dif.o 00:02:09.916 CXX test/cpp_headers/dma.o 00:02:09.916 CXX test/cpp_headers/env_dpdk.o 00:02:09.916 CXX test/cpp_headers/env.o 00:02:09.916 CXX test/cpp_headers/event.o 00:02:09.916 CXX test/cpp_headers/fd.o 00:02:10.177 CXX test/cpp_headers/fd_group.o 00:02:10.177 CXX test/cpp_headers/file.o 00:02:10.177 CXX test/cpp_headers/gpt_spec.o 00:02:10.177 CXX test/cpp_headers/ftl.o 00:02:10.177 CXX test/cpp_headers/idxd.o 00:02:10.177 CXX test/cpp_headers/hexlify.o 00:02:10.177 CXX test/cpp_headers/idxd_spec.o 00:02:10.177 
CC examples/ioat/perf/perf.o 00:02:10.177 CXX test/cpp_headers/init.o 00:02:10.177 CXX test/cpp_headers/histogram_data.o 00:02:10.177 LINK spdk_lspci 00:02:10.177 CXX test/cpp_headers/ioat_spec.o 00:02:10.177 CXX test/cpp_headers/iscsi_spec.o 00:02:10.177 CXX test/cpp_headers/ioat.o 00:02:10.177 CXX test/cpp_headers/json.o 00:02:10.177 CXX test/cpp_headers/keyring_module.o 00:02:10.177 CXX test/cpp_headers/jsonrpc.o 00:02:10.177 CXX test/cpp_headers/likely.o 00:02:10.177 CXX test/cpp_headers/keyring.o 00:02:10.177 CXX test/cpp_headers/lvol.o 00:02:10.177 CXX test/cpp_headers/log.o 00:02:10.177 CXX test/cpp_headers/memory.o 00:02:10.177 CC examples/util/zipf/zipf.o 00:02:10.177 CXX test/cpp_headers/net.o 00:02:10.177 CXX test/cpp_headers/mmio.o 00:02:10.177 CXX test/cpp_headers/nbd.o 00:02:10.177 CXX test/cpp_headers/notify.o 00:02:10.177 CC test/env/vtophys/vtophys.o 00:02:10.177 CXX test/cpp_headers/nvme.o 00:02:10.177 CC test/thread/poller_perf/poller_perf.o 00:02:10.177 CXX test/cpp_headers/nvme_intel.o 00:02:10.177 CC examples/ioat/verify/verify.o 00:02:10.177 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:10.177 CXX test/cpp_headers/nvme_ocssd.o 00:02:10.177 CC test/env/memory/memory_ut.o 00:02:10.177 CXX test/cpp_headers/nvme_spec.o 00:02:10.177 CXX test/cpp_headers/nvme_zns.o 00:02:10.177 CXX test/cpp_headers/nvmf_cmd.o 00:02:10.177 CC test/env/pci/pci_ut.o 00:02:10.177 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:10.177 CXX test/cpp_headers/nvmf_spec.o 00:02:10.177 CXX test/cpp_headers/nvmf.o 00:02:10.177 CC test/app/jsoncat/jsoncat.o 00:02:10.177 CXX test/cpp_headers/nvmf_transport.o 00:02:10.177 CXX test/cpp_headers/opal_spec.o 00:02:10.177 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:10.177 CXX test/cpp_headers/opal.o 00:02:10.177 CXX test/cpp_headers/pci_ids.o 00:02:10.177 CXX test/cpp_headers/reduce.o 00:02:10.177 CXX test/cpp_headers/pipe.o 00:02:10.177 CXX test/cpp_headers/queue.o 00:02:10.177 CXX test/cpp_headers/scsi_spec.o 00:02:10.177 CXX test/cpp_headers/scheduler.o 00:02:10.177 CXX test/cpp_headers/rpc.o 00:02:10.177 CXX test/cpp_headers/scsi.o 00:02:10.177 CXX test/cpp_headers/string.o 00:02:10.177 CXX test/cpp_headers/sock.o 00:02:10.177 CXX test/cpp_headers/stdinc.o 00:02:10.177 CC test/app/stub/stub.o 00:02:10.177 CXX test/cpp_headers/thread.o 00:02:10.177 CXX test/cpp_headers/trace_parser.o 00:02:10.177 CXX test/cpp_headers/trace.o 00:02:10.177 CC test/app/histogram_perf/histogram_perf.o 00:02:10.177 CC app/fio/nvme/fio_plugin.o 00:02:10.177 CXX test/cpp_headers/tree.o 00:02:10.177 CXX test/cpp_headers/ublk.o 00:02:10.177 CXX test/cpp_headers/uuid.o 00:02:10.177 CXX test/cpp_headers/util.o 00:02:10.177 CXX test/cpp_headers/vfio_user_pci.o 00:02:10.177 CXX test/cpp_headers/version.o 00:02:10.177 CXX test/cpp_headers/vfio_user_spec.o 00:02:10.177 CXX test/cpp_headers/xor.o 00:02:10.177 CXX test/cpp_headers/vhost.o 00:02:10.177 CXX test/cpp_headers/vmd.o 00:02:10.177 CXX test/cpp_headers/zipf.o 00:02:10.177 CC app/fio/bdev/fio_plugin.o 00:02:10.177 LINK rpc_client_test 00:02:10.177 CC test/dma/test_dma/test_dma.o 00:02:10.177 LINK spdk_trace_record 00:02:10.177 CC test/app/bdev_svc/bdev_svc.o 00:02:10.177 LINK spdk_nvme_discover 00:02:10.177 LINK interrupt_tgt 00:02:10.437 LINK nvmf_tgt 00:02:10.437 LINK iscsi_tgt 00:02:10.437 CC test/env/mem_callbacks/mem_callbacks.o 00:02:10.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:10.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:10.437 LINK spdk_tgt 00:02:10.437 LINK zipf 00:02:10.437 LINK vtophys 
00:02:10.437 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:10.437 LINK jsoncat 00:02:10.437 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:10.696 LINK spdk_trace 00:02:10.696 LINK spdk_dd 00:02:10.696 LINK bdev_svc 00:02:10.696 LINK ioat_perf 00:02:10.696 LINK poller_perf 00:02:10.696 LINK stub 00:02:10.696 LINK histogram_perf 00:02:10.696 LINK env_dpdk_post_init 00:02:10.696 LINK verify 00:02:10.956 LINK test_dma 00:02:10.956 LINK spdk_nvme_perf 00:02:10.956 CC app/vhost/vhost.o 00:02:10.956 CC examples/vmd/led/led.o 00:02:10.956 CC examples/sock/hello_world/hello_sock.o 00:02:10.956 LINK spdk_nvme 00:02:10.956 CC examples/thread/thread/thread_ex.o 00:02:10.956 CC examples/vmd/lsvmd/lsvmd.o 00:02:10.956 CC examples/idxd/perf/perf.o 00:02:10.956 LINK pci_ut 00:02:10.956 LINK nvme_fuzz 00:02:10.956 LINK vhost_fuzz 00:02:11.217 LINK spdk_bdev 00:02:11.217 LINK led 00:02:11.217 LINK vhost 00:02:11.217 LINK spdk_nvme_identify 00:02:11.217 CC test/event/reactor/reactor.o 00:02:11.217 LINK lsvmd 00:02:11.217 CC test/event/event_perf/event_perf.o 00:02:11.217 LINK spdk_top 00:02:11.217 CC test/event/reactor_perf/reactor_perf.o 00:02:11.217 LINK mem_callbacks 00:02:11.217 LINK hello_sock 00:02:11.217 CC test/event/app_repeat/app_repeat.o 00:02:11.217 CC test/event/scheduler/scheduler.o 00:02:11.217 LINK thread 00:02:11.217 LINK idxd_perf 00:02:11.477 LINK reactor 00:02:11.477 LINK event_perf 00:02:11.477 LINK reactor_perf 00:02:11.477 CC test/nvme/aer/aer.o 00:02:11.477 LINK memory_ut 00:02:11.477 CC test/nvme/sgl/sgl.o 00:02:11.477 LINK app_repeat 00:02:11.477 CC test/nvme/fdp/fdp.o 00:02:11.477 CC test/nvme/err_injection/err_injection.o 00:02:11.477 CC test/nvme/reset/reset.o 00:02:11.477 CC test/nvme/fused_ordering/fused_ordering.o 00:02:11.477 CC test/nvme/connect_stress/connect_stress.o 00:02:11.477 CC test/nvme/boot_partition/boot_partition.o 00:02:11.477 CC test/nvme/cuse/cuse.o 00:02:11.477 CC test/nvme/compliance/nvme_compliance.o 00:02:11.477 CC test/nvme/e2edp/nvme_dp.o 00:02:11.477 CC test/accel/dif/dif.o 00:02:11.477 CC test/blobfs/mkfs/mkfs.o 00:02:11.477 CC test/nvme/simple_copy/simple_copy.o 00:02:11.477 CC test/nvme/startup/startup.o 00:02:11.477 CC test/nvme/overhead/overhead.o 00:02:11.477 CC test/nvme/reserve/reserve.o 00:02:11.477 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:11.477 LINK scheduler 00:02:11.477 CC test/lvol/esnap/esnap.o 00:02:11.477 LINK boot_partition 00:02:11.736 LINK doorbell_aers 00:02:11.736 LINK fused_ordering 00:02:11.736 LINK startup 00:02:11.736 LINK err_injection 00:02:11.736 LINK connect_stress 00:02:11.736 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:11.736 LINK reset 00:02:11.736 CC examples/nvme/hotplug/hotplug.o 00:02:11.737 CC examples/nvme/hello_world/hello_world.o 00:02:11.737 CC examples/nvme/abort/abort.o 00:02:11.737 LINK mkfs 00:02:11.737 LINK reserve 00:02:11.737 CC examples/nvme/arbitration/arbitration.o 00:02:11.737 LINK aer 00:02:11.737 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:11.737 CC examples/nvme/reconnect/reconnect.o 00:02:11.737 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:11.737 LINK simple_copy 00:02:11.737 LINK sgl 00:02:11.737 LINK nvme_dp 00:02:11.737 LINK overhead 00:02:11.737 LINK fdp 00:02:11.737 LINK nvme_compliance 00:02:11.737 CC examples/accel/perf/accel_perf.o 00:02:11.737 CC examples/blob/cli/blobcli.o 00:02:11.737 LINK pmr_persistence 00:02:11.737 CC examples/blob/hello_world/hello_blob.o 00:02:11.996 LINK dif 00:02:11.996 LINK hello_world 00:02:11.996 LINK cmb_copy 00:02:11.996 LINK hotplug 
00:02:11.996 LINK abort 00:02:11.996 LINK arbitration 00:02:11.996 LINK reconnect 00:02:11.996 LINK iscsi_fuzz 00:02:11.996 LINK nvme_manage 00:02:11.996 LINK hello_blob 00:02:12.257 LINK accel_perf 00:02:12.257 LINK blobcli 00:02:12.517 CC test/bdev/bdevio/bdevio.o 00:02:12.517 LINK cuse 00:02:12.778 CC examples/bdev/hello_world/hello_bdev.o 00:02:12.778 CC examples/bdev/bdevperf/bdevperf.o 00:02:12.778 LINK bdevio 00:02:13.037 LINK hello_bdev 00:02:13.625 LINK bdevperf 00:02:14.195 CC examples/nvmf/nvmf/nvmf.o 00:02:14.456 LINK nvmf 00:02:15.838 LINK esnap 00:02:16.100 00:02:16.100 real 0m51.116s 00:02:16.100 user 6m33.411s 00:02:16.100 sys 4m10.689s 00:02:16.100 20:52:43 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:16.100 20:52:43 make -- common/autotest_common.sh@10 -- $ set +x 00:02:16.100 ************************************ 00:02:16.100 END TEST make 00:02:16.100 ************************************ 00:02:16.100 20:52:43 -- common/autotest_common.sh@1142 -- $ return 0 00:02:16.100 20:52:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:16.100 20:52:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:16.100 20:52:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:16.100 20:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.100 20:52:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:16.100 20:52:43 -- pm/common@44 -- $ pid=1605812 00:02:16.100 20:52:43 -- pm/common@50 -- $ kill -TERM 1605812 00:02:16.100 20:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.100 20:52:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:16.100 20:52:43 -- pm/common@44 -- $ pid=1605813 00:02:16.100 20:52:43 -- pm/common@50 -- $ kill -TERM 1605813 00:02:16.100 20:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.100 20:52:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:16.100 20:52:43 -- pm/common@44 -- $ pid=1605815 00:02:16.100 20:52:43 -- pm/common@50 -- $ kill -TERM 1605815 00:02:16.100 20:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.100 20:52:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:16.100 20:52:43 -- pm/common@44 -- $ pid=1605839 00:02:16.100 20:52:43 -- pm/common@50 -- $ sudo -E kill -TERM 1605839 00:02:16.360 20:52:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:16.360 20:52:43 -- nvmf/common.sh@7 -- # uname -s 00:02:16.360 20:52:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:16.360 20:52:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:16.360 20:52:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:16.360 20:52:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:16.360 20:52:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:16.360 20:52:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:16.360 20:52:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:16.360 20:52:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:16.360 20:52:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:16.360 20:52:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:16.360 20:52:43 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:16.360 20:52:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:16.360 20:52:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:16.360 20:52:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:16.360 20:52:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:16.360 20:52:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:16.360 20:52:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:16.360 20:52:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:16.360 20:52:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.360 20:52:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.360 20:52:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.360 20:52:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.360 20:52:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.360 20:52:43 -- paths/export.sh@5 -- # export PATH 00:02:16.360 20:52:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.360 20:52:43 -- nvmf/common.sh@47 -- # : 0 00:02:16.360 20:52:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:16.360 20:52:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:16.360 20:52:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:16.360 20:52:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:16.360 20:52:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:16.360 20:52:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:16.360 20:52:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:16.360 20:52:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:16.360 20:52:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:16.360 20:52:43 -- spdk/autotest.sh@32 -- # uname -s 00:02:16.360 20:52:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:16.360 20:52:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:16.360 20:52:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:16.360 20:52:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:16.360 20:52:43 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:16.360 20:52:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:16.360 20:52:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:16.360 20:52:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:16.360 20:52:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1668916 00:02:16.360 20:52:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:16.360 20:52:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:16.360 20:52:43 -- pm/common@17 -- # local monitor 00:02:16.360 20:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.360 20:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.360 20:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.360 20:52:43 -- pm/common@21 -- # date +%s 00:02:16.360 20:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.360 20:52:43 -- pm/common@21 -- # date +%s 00:02:16.360 20:52:43 -- pm/common@25 -- # sleep 1 00:02:16.360 20:52:43 -- pm/common@21 -- # date +%s 00:02:16.360 20:52:43 -- pm/common@21 -- # date +%s 00:02:16.360 20:52:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069563 00:02:16.360 20:52:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069563 00:02:16.360 20:52:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069563 00:02:16.360 20:52:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069563 00:02:16.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721069563_collect-vmstat.pm.log 00:02:16.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721069563_collect-cpu-load.pm.log 00:02:16.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721069563_collect-cpu-temp.pm.log 00:02:16.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721069563_collect-bmc-pm.bmc.pm.log 00:02:17.304 20:52:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:17.304 20:52:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:17.304 20:52:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:17.304 20:52:44 -- common/autotest_common.sh@10 -- # set +x 00:02:17.304 20:52:44 -- spdk/autotest.sh@59 -- # create_test_list 00:02:17.304 20:52:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:17.304 20:52:44 -- common/autotest_common.sh@10 -- # set +x 00:02:17.304 20:52:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:17.304 20:52:44 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:17.304 20:52:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
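The prologue above saves the machine's existing kernel.core_pattern, points it at a collector script under the output directory, and starts the collect-cpu-load/vmstat/cpu-temp/bmc-pm monitors before any test runs. A minimal sketch of that save-and-restore pattern, using a hypothetical placeholder handler rather than the SPDK core-collector.sh itself:

# Sketch only: redirect kernel core dumps to a custom handler for the
# duration of a test run, then restore the original pattern on exit.
# /tmp/my-core-collector.sh is a hypothetical placeholder, not the real script.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)
echo '|/tmp/my-core-collector.sh %P %s %t' | sudo tee /proc/sys/kernel/core_pattern > /dev/null
trap 'echo "$old_core_pattern" | sudo tee /proc/sys/kernel/core_pattern > /dev/null' EXIT
# ... run the tests here ...

Restoring via a trap means the pattern is put back even if the run is interrupted, which matches the autotest_cleanup trap visible later in the log.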
00:02:17.304 20:52:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:17.304 20:52:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:17.304 20:52:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:17.304 20:52:44 -- common/autotest_common.sh@1455 -- # uname 00:02:17.565 20:52:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:17.566 20:52:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:17.566 20:52:44 -- common/autotest_common.sh@1475 -- # uname 00:02:17.566 20:52:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:17.566 20:52:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:17.566 20:52:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:17.566 20:52:44 -- spdk/autotest.sh@72 -- # hash lcov 00:02:17.566 20:52:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:17.566 20:52:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:17.566 --rc lcov_branch_coverage=1 00:02:17.566 --rc lcov_function_coverage=1 00:02:17.566 --rc genhtml_branch_coverage=1 00:02:17.566 --rc genhtml_function_coverage=1 00:02:17.566 --rc genhtml_legend=1 00:02:17.566 --rc geninfo_all_blocks=1 00:02:17.566 ' 00:02:17.566 20:52:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:17.566 --rc lcov_branch_coverage=1 00:02:17.566 --rc lcov_function_coverage=1 00:02:17.566 --rc genhtml_branch_coverage=1 00:02:17.566 --rc genhtml_function_coverage=1 00:02:17.566 --rc genhtml_legend=1 00:02:17.566 --rc geninfo_all_blocks=1 00:02:17.566 ' 00:02:17.566 20:52:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:17.566 --rc lcov_branch_coverage=1 00:02:17.566 --rc lcov_function_coverage=1 00:02:17.566 --rc genhtml_branch_coverage=1 00:02:17.566 --rc genhtml_function_coverage=1 00:02:17.566 --rc genhtml_legend=1 00:02:17.566 --rc geninfo_all_blocks=1 00:02:17.566 --no-external' 00:02:17.566 20:52:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:17.566 --rc lcov_branch_coverage=1 00:02:17.566 --rc lcov_function_coverage=1 00:02:17.566 --rc genhtml_branch_coverage=1 00:02:17.566 --rc genhtml_function_coverage=1 00:02:17.566 --rc genhtml_legend=1 00:02:17.566 --rc geninfo_all_blocks=1 00:02:17.566 --no-external' 00:02:17.566 20:52:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:17.566 lcov: LCOV version 1.14 00:02:17.566 20:52:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:22.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:22.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:22.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:22.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:22.858 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:22.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:22.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:22.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:22.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:22.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:22.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:22.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:22.859 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:22.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:22.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:22.860 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:22.860 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:22.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:22.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:40.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:40.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:47.566 20:53:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:47.566 20:53:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:47.566 20:53:13 -- common/autotest_common.sh@10 -- # set +x 00:02:47.566 20:53:13 -- spdk/autotest.sh@91 -- # rm -f 00:02:47.566 20:53:13 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.891 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:50.891 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:50.891 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:50.891 20:53:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:50.891 20:53:17 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:50.891 20:53:17 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:50.891 20:53:17 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:50.891 20:53:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:50.891 20:53:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:50.891 20:53:17 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:50.891 
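The "-i -t Baseline" capture above records zero-coverage entries for every instrumented object before any test has run, which is why geninfo warns that the cpp_headers .gcno files contain no functions; those warnings are expected at this stage. A minimal sketch of the usual baseline/run/combine flow with lcov, with a generic build path assumed for illustration:

# Sketch: capture a zero baseline before tests, a real capture afterwards,
# then merge the two so files that never executed still appear at 0%.
SRC=/path/to/build   # assumed directory containing the .gcno/.gcda files
lcov --capture --initial --directory "$SRC" -o cov_base.info -t Baseline --no-external -q
# ... run the test suite here ...
lcov --capture --directory "$SRC" -o cov_test.info -t Tests --no-external -q
lcov -a cov_base.info -a cov_test.info -o cov_total.info
genhtml cov_total.info -o coverage_html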
20:53:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.891 20:53:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:50.891 20:53:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:50.891 20:53:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:50.891 20:53:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:50.891 20:53:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:50.891 20:53:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:50.891 20:53:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:50.891 No valid GPT data, bailing 00:02:50.891 20:53:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:50.892 20:53:18 -- scripts/common.sh@391 -- # pt= 00:02:50.892 20:53:18 -- scripts/common.sh@392 -- # return 1 00:02:50.892 20:53:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:50.892 1+0 records in 00:02:50.892 1+0 records out 00:02:50.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041852 s, 251 MB/s 00:02:50.892 20:53:18 -- spdk/autotest.sh@118 -- # sync 00:02:50.892 20:53:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:50.892 20:53:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:50.892 20:53:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:59.036 20:53:26 -- spdk/autotest.sh@124 -- # uname -s 00:02:59.036 20:53:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:59.036 20:53:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.036 20:53:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.036 20:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.036 20:53:26 -- common/autotest_common.sh@10 -- # set +x 00:02:59.036 ************************************ 00:02:59.036 START TEST setup.sh 00:02:59.036 ************************************ 00:02:59.036 20:53:26 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.036 * Looking for test storage... 00:02:59.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.036 20:53:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:59.036 20:53:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:59.036 20:53:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.036 20:53:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.036 20:53:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.036 20:53:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:59.298 ************************************ 00:02:59.298 START TEST acl 00:02:59.298 ************************************ 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.298 * Looking for test storage... 
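During pre-cleanup the script checks each /dev/nvme*n* namespace for a usable partition table and, finding none ("No valid GPT data, bailing", empty PTTYPE from blkid), zeroes the first MiB of the device. A rough equivalent of that guard, sketched with plain blkid/dd instead of SPDK's spdk-gpt.py:

# Sketch: only zero the start of a namespace when blkid reports no
# partition-table type; a device with a real PTTYPE is left untouched.
dev=/dev/nvme0n1   # assumed test device; adjust to the machine under test
pt=$(blkid -s PTTYPE -o value "$dev")
if [[ -z "$pt" ]]; then
    sudo dd if=/dev/zero of="$dev" bs=1M count=1
fi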
00:02:59.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.298 20:53:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.298 20:53:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:59.298 20:53:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:59.298 20:53:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:59.298 20:53:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:59.298 20:53:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:59.298 20:53:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:59.298 20:53:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.298 20:53:26 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.498 20:53:30 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:03.498 20:53:30 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:03.498 20:53:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.498 20:53:30 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:03.498 20:53:30 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.498 20:53:30 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:07.705 Hugepages 00:03:07.705 node hugesize free / total 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.705 00:03:07.705 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:07.705 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:07.706 20:53:34 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:07.706 20:53:34 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.706 20:53:34 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.706 20:53:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.706 ************************************ 00:03:07.706 START TEST denied 00:03:07.706 ************************************ 00:03:07.706 20:53:34 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:07.706 20:53:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:07.706 20:53:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:07.706 20:53:34 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:07.706 20:53:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.706 20:53:34 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.073 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:11.073 20:53:38 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.073 20:53:38 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.365 00:03:16.365 real 0m8.131s 00:03:16.365 user 0m2.597s 00:03:16.365 sys 0m4.653s 00:03:16.365 20:53:42 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.365 20:53:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:16.365 ************************************ 00:03:16.365 END TEST denied 00:03:16.365 ************************************ 00:03:16.365 20:53:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:16.365 20:53:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:16.365 20:53:42 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.365 20:53:42 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.365 20:53:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:16.365 ************************************ 00:03:16.365 START TEST allowed 00:03:16.365 ************************************ 00:03:16.365 20:53:42 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:16.366 20:53:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:16.366 20:53:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:16.366 20:53:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:16.366 20:53:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.366 20:53:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.659 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:21.659 20:53:48 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:21.659 20:53:48 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:21.659 20:53:48 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:21.659 20:53:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.659 20:53:48 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.866 00:03:25.866 real 0m9.887s 00:03:25.866 user 0m2.932s 00:03:25.866 sys 0m5.278s 00:03:25.866 20:53:52 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.866 20:53:52 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:25.866 ************************************ 00:03:25.866 END TEST allowed 00:03:25.866 ************************************ 00:03:25.866 20:53:52 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:25.866 00:03:25.866 real 0m26.267s 00:03:25.866 user 0m8.573s 00:03:25.866 sys 0m15.352s 00:03:25.866 20:53:52 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.866 20:53:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.866 ************************************ 00:03:25.866 END TEST acl 00:03:25.866 ************************************ 00:03:25.866 20:53:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:25.866 20:53:52 setup.sh -- 
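Both ACL tests above verify the result the same way: they resolve the controller's sysfs driver symlink and compare it with the expected driver (nvme while the device is blocked, vfio-pci after the allowed run rebinds it). A small sketch of that check, with the BDF taken from this run as an example:

# Sketch: report which kernel driver a PCI function is currently bound to.
bdf=0000:65:00.0   # example BDF from this log; substitute the device of interest
if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"
else
    echo "no driver bound"
fi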
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:25.866 20:53:52 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.866 20:53:52 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.866 20:53:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.867 ************************************ 00:03:25.867 START TEST hugepages 00:03:25.867 ************************************ 00:03:25.867 20:53:52 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:25.867 * Looking for test storage... 00:03:25.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106516184 kB' 'MemAvailable: 110249648 kB' 'Buffers: 4132 kB' 'Cached: 10615012 kB' 'SwapCached: 0 kB' 'Active: 7560936 kB' 'Inactive: 3701232 kB' 'Active(anon): 7069504 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646396 kB' 'Mapped: 165168 kB' 'Shmem: 6426480 kB' 'KReclaimable: 581744 kB' 'Slab: 1461448 kB' 'SReclaimable: 581744 kB' 'SUnreclaim: 879704 kB' 'KernelStack: 27872 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8680172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237692 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.867 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:25.868 20:53:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:25.868 20:53:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.868 20:53:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.868 20:53:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.868 ************************************ 00:03:25.868 START TEST default_setup 00:03:25.868 ************************************ 00:03:25.868 20:53:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:25.868 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:25.868 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.868 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:25.868 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:25.868 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:25.868 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:25.868 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.869 20:53:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.110 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:03:30.110 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.110 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:30.110 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:30.110 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:30.110 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108658456 kB' 'MemAvailable: 112391888 kB' 'Buffers: 4132 kB' 'Cached: 10615148 kB' 'SwapCached: 0 kB' 'Active: 7578220 kB' 'Inactive: 3701232 kB' 'Active(anon): 7086788 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663116 kB' 'Mapped: 165456 kB' 'Shmem: 6426616 kB' 'KReclaimable: 581712 kB' 'Slab: 1458628 kB' 'SReclaimable: 581712 kB' 
'SUnreclaim: 876916 kB' 'KernelStack: 27904 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8701380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237804 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.111 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.112 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108660092 kB' 'MemAvailable: 112393524 kB' 'Buffers: 4132 kB' 'Cached: 10615152 kB' 'SwapCached: 0 kB' 'Active: 7578460 kB' 'Inactive: 3701232 kB' 'Active(anon): 7087028 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663432 kB' 'Mapped: 165532 kB' 'Shmem: 6426620 kB' 'KReclaimable: 581712 kB' 'Slab: 1458668 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 876956 kB' 'KernelStack: 27888 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8701400 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.113 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.114 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108660644 kB' 'MemAvailable: 112394076 kB' 'Buffers: 4132 kB' 'Cached: 10615168 kB' 'SwapCached: 0 kB' 'Active: 7577256 kB' 'Inactive: 3701232 kB' 'Active(anon): 7085824 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662672 kB' 'Mapped: 165424 kB' 'Shmem: 6426636 kB' 'KReclaimable: 581712 kB' 'Slab: 1458700 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 876988 kB' 'KernelStack: 27872 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8701420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.115 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 
20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.116 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.117 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.118 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.118 nr_hugepages=1024 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.118 resv_hugepages=0 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.118 surplus_hugepages=0 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.118 anon_hugepages=0 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126338848 kB' 'MemFree: 108660908 kB' 'MemAvailable: 112394340 kB' 'Buffers: 4132 kB' 'Cached: 10615192 kB' 'SwapCached: 0 kB' 'Active: 7577340 kB' 'Inactive: 3701232 kB' 'Active(anon): 7085908 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662716 kB' 'Mapped: 165424 kB' 'Shmem: 6426660 kB' 'KReclaimable: 581712 kB' 'Slab: 1458700 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 876988 kB' 'KernelStack: 27888 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8701444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.118 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 
20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.119 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60004264 kB' 'MemUsed: 5654744 kB' 'SwapCached: 0 kB' 'Active: 1477720 kB' 'Inactive: 288448 kB' 'Active(anon): 1319972 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1607860 kB' 'Mapped: 36620 kB' 'AnonPages: 161680 kB' 'Shmem: 1161664 kB' 'KernelStack: 13368 kB' 'PageTables: 3596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325436 kB' 'Slab: 745568 kB' 
'SReclaimable: 325436 kB' 'SUnreclaim: 420132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.120 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
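The trace above and below is setup/common.sh's get_meminfo helper scanning "Key: value" pairs from /proc/meminfo (and, for the per-node pass, /sys/devices/system/node/node0/meminfo) until it reaches the requested HugePages_* field, echoing the value and returning. The sketch below reconstructs that lookup pattern from the traced commands only; it is not copied from the SPDK source, and the "shopt -s extglob" line plus the trailing usage lines are additions for self-containment (the trace's "Node +([0-9])" strip implies extglob is already enabled in the real script).

#!/usr/bin/env bash
shopt -s extglob   # assumption: needed here for the +([0-9]) pattern below

# Sketch of the get_meminfo lookup pattern visible in the trace
# (reconstructed for illustration, not the verbatim setup/common.sh).
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # A per-node query reads that NUMA node's own meminfo file instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [kB]" pairs until the requested key is found.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage mirroring the values this run ends up with: surp=0, resv=0, 1024 pages.
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
(( total == 1024 + surp + resv )) && echo "hugepage accounting consistent"

With 1024 default-sized hugepages allocated and both HugePages_Rsvd and HugePages_Surp at 0, the traced check (( 1024 == nr_hugepages + surp + resv )) passes; the per-node pass that follows (get_nodes over /sys/devices/system/node/node+([0-9]), no_nodes=2) then queries node0's meminfo and attributes all 1024 pages to node 0.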
00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.121 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.122 20:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.122 20:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.122 20:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.122 20:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.122 20:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.122 node0=1024 expecting 1024 00:03:30.122 20:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.122 00:03:30.122 real 0m4.116s 00:03:30.122 user 0m1.621s 00:03:30.122 sys 0m2.489s 00:03:30.122 20:53:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.122 20:53:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:30.122 ************************************ 00:03:30.122 END TEST default_setup 00:03:30.122 ************************************ 00:03:30.122 20:53:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.122 20:53:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:30.122 20:53:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.122 20:53:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.122 20:53:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.122 ************************************ 00:03:30.122 START TEST per_node_1G_alloc 00:03:30.122 ************************************ 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.122 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.123 20:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.334 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:34.334 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 
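[editor's note] The per_node_1G_alloc prologue above requests 1048576 kB per node on nodes 0 and 1; with the default 2048 kB hugepage size that is 512 pages, and the trace assigns 512 to each listed node (nodes_test[0]=512, nodes_test[1]=512) before running scripts/setup.sh with NRHUGE=512 HUGENODE=0,1. A small sketch of that arithmetic, assuming illustrative variable names rather than the exact hugepages.sh internals:

    request_kb=1048576          # 1 GiB per node, as requested by the test
    hugepage_kb=2048            # default hugepage size on this machine
    nodes=(0 1)                 # HUGENODE=0,1 in the trace

    pages=$(( request_kb / hugepage_kb ))              # 512
    for n in "${nodes[@]}"; do
        echo "node${n}: reserving ${pages} hugepages"  # mirrors nodes_test[n]=512
    done
    echo "NRHUGE=${pages} HUGENODE=$(IFS=,; echo "${nodes[*]}")"

The vfio-pci lines that follow are scripts/setup.sh confirming the NVMe and DMA devices are already bound, so only the hugepage reservation changes.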
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108651648 kB' 'MemAvailable: 112385080 kB' 'Buffers: 4132 kB' 'Cached: 10615308 kB' 'SwapCached: 0 kB' 'Active: 7581164 kB' 'Inactive: 3701232 kB' 'Active(anon): 7089732 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666420 kB' 'Mapped: 165232 kB' 'Shmem: 6426776 kB' 'KReclaimable: 581712 kB' 'Slab: 1458032 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 876320 kB' 'KernelStack: 28176 kB' 'PageTables: 9552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8699604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238028 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.334 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108640884 kB' 'MemAvailable: 112374316 kB' 'Buffers: 4132 kB' 'Cached: 10615312 kB' 'SwapCached: 0 kB' 'Active: 7589056 kB' 'Inactive: 3701232 kB' 'Active(anon): 7097624 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674264 kB' 'Mapped: 165288 kB' 'Shmem: 6426780 kB' 'KReclaimable: 581712 kB' 'Slab: 1458028 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 876316 kB' 'KernelStack: 28160 kB' 'PageTables: 9672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8702928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238016 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 
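[editor's note] The anon accounting above first checks /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never", i.e. THP is not fully disabled) and only then reads AnonHugePages from /proc/meminfo, ending with anon=0. A hedged sketch of that check, reusing the illustrative meminfo_value helper sketched earlier:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    anon=0
    if [[ $thp != *"[never]"* ]]; then          # THP enabled in some mode
        anon=$(meminfo_value AnonHugePages)     # kB of transparent hugepages in use
    fi
    echo "anon=$anon"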
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.335 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 
20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 20:54:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [remaining /proc/meminfo fields (VmallocTotal through HugePages_Rsvd) read and skipped with continue until HugePages_Surp matched]
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.336 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108644696 kB' 'MemAvailable: 112378128 kB' 'Buffers: 4132 kB' 'Cached: 10615332 kB' 'SwapCached: 0 kB' 'Active: 7582888 kB' 'Inactive: 3701232 kB' 'Active(anon): 7091456 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668036 kB' 'Mapped: 165104 kB' 'Shmem: 6426800 kB' 'KReclaimable: 581712 kB' 'Slab: 1458096 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 876384 kB' 'KernelStack: 28032 kB' 'PageTables: 9452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8698440 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237984 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB'
00:03:34.337 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-field scan of the snapshot above (MemTotal through HugePages_Free), each skipped with continue, until HugePages_Rsvd matched]
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
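For reference, the scans condensed above are all doing the same thing: read a meminfo file line by line, split each line into a key and a value, and return the value once the requested key (HugePages_Surp, HugePages_Rsvd, ...) is found. A minimal standalone sketch of that lookup follows; it is illustrative only, not the SPDK setup/common.sh implementation, and meminfo_lookup is a made-up name.

  # Illustrative sketch only -- not SPDK's get_meminfo.
  # Return the value of one meminfo field, either globally (/proc/meminfo)
  # or for a single NUMA node when a node id is passed.
  meminfo_lookup() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while read -r line; do
          line=${line#"Node $node "}                 # per-node files prefix every key with "Node <id>"
          IFS=': ' read -r var val _ <<< "$line"     # split "Key:   value [kB]"
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < "$mem_f"
      return 1
  }
  # e.g. meminfo_lookup HugePages_Surp      -> 0 in the snapshot above
  #      meminfo_lookup HugePages_Total 0   -> 512 on node0 later in this run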
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:34.338 nr_hugepages=1024
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:34.338 resv_hugepages=0
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:34.338 surplus_hugepages=0
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:34.338 anon_hugepages=0
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.338 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108645756 kB' 'MemAvailable: 112379188 kB' 'Buffers: 4132 kB' 'Cached: 10615332 kB' 'SwapCached: 0 kB' 'Active: 7578496 kB' 'Inactive: 3701232 kB' 'Active(anon): 7087064 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663252 kB' 'Mapped: 164600 kB' 'Shmem: 6426800 kB' 'KReclaimable: 581712 kB' 'Slab: 1458096 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 876384 kB' 'KernelStack: 28032 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238028 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB'
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-field scan of the snapshot above (MemTotal through Unaccepted), each skipped with continue, until HugePages_Total matched]
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
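The three lookups above feed the consistency check at setup/hugepages.sh@107/@110: the HugePages_Total the kernel reports has to equal the page count this run asked for (nr_hugepages=1024 above) plus any surplus and reserved pages, i.e. 1024 == 1024 + 0 + 0 here. A rough standalone equivalent, illustrative only and not the SPDK setup/hugepages.sh code:

  # Illustrative sketch only -- mirrors the accounting asserted above.
  requested=1024   # the count this run configured, per the trace
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  if (( total == requested + surp + resv )); then
      echo "hugepage accounting consistent ($total == $requested + $surp + $resv)"
  else
      echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
  fi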
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61064524 kB' 'MemUsed: 4594484 kB' 'SwapCached: 0 kB' 'Active: 1476532 kB' 'Inactive: 288448 kB' 'Active(anon): 1318784 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1607928 kB' 'Mapped: 36348 kB' 'AnonPages: 160248 kB' 'Shmem: 1161732 kB' 'KernelStack: 13320 kB' 'PageTables: 3304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325436 kB' 'Slab: 745252 kB' 'SReclaimable: 325436 kB' 'SUnreclaim: 419816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
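get_nodes found two NUMA nodes and the test expects the 1024 pages to be split 512/512 across them; the per-node read above pulls node0's counters from /sys/devices/system/node/node0/meminfo. A small illustrative loop (not part of the test) that tallies the same per-node counts and compares them with the global total:

  # Illustrative sketch only. Per-node meminfo lines look like
  # "Node 0 HugePages_Total:   512", so the count is the last field.
  sum=0
  for f in /sys/devices/system/node/node[0-9]*/meminfo; do
      pages=$(awk '/HugePages_Total:/ {print $NF}' "$f")
      printf '%s: %s hugepages\n' "${f%/meminfo}" "$pages"
      sum=$((sum + pages))
  done
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  echo "per-node sum: $sum, global total: $total"    # 512 + 512 == 1024 in this run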
00:03:34.339 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-field scan of the node0 snapshot above (MemTotal through Unaccepted so far), each skipped with continue, while looking for HugePages_Surp] 00:03:34.340 20:54:01
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47573828 kB' 'MemUsed: 13106012 kB' 'SwapCached: 0 kB' 'Active: 6106420 kB' 'Inactive: 3412784 kB' 'Active(anon): 5772736 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412784 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9011536 kB' 'Mapped: 128560 kB' 'AnonPages: 507876 kB' 'Shmem: 5265068 kB' 
'KernelStack: 14552 kB' 'PageTables: 5388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 256276 kB' 'Slab: 712832 kB' 'SReclaimable: 256276 kB' 'SUnreclaim: 456556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.340 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.341 node0=512 expecting 512 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:34.341 node1=512 expecting 512 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.341 00:03:34.341 real 0m4.135s 00:03:34.341 user 0m1.610s 00:03:34.341 sys 0m2.594s 00:03:34.341 20:54:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.341 20:54:01 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.341 ************************************ 00:03:34.341 END TEST per_node_1G_alloc 00:03:34.341 ************************************ 00:03:34.341 20:54:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.341 20:54:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:34.341 20:54:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.341 20:54:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.341 20:54:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.341 ************************************ 00:03:34.341 START TEST even_2G_alloc 00:03:34.341 ************************************ 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:34.341 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:34.341 20:54:01 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:34.342 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.342 20:54:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.545 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:38.545 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108665944 kB' 'MemAvailable: 112399376 kB' 'Buffers: 4132 kB' 'Cached: 10615508 kB' 'SwapCached: 0 kB' 'Active: 7583732 kB' 'Inactive: 3701232 kB' 'Active(anon): 7092300 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668608 kB' 'Mapped: 165048 kB' 'Shmem: 6426976 kB' 'KReclaimable: 581712 kB' 'Slab: 1457308 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 875596 kB' 'KernelStack: 27904 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8695180 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237888 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.545 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108665920 kB' 'MemAvailable: 112399352 kB' 'Buffers: 4132 kB' 'Cached: 10615512 kB' 'SwapCached: 0 kB' 'Active: 7583848 kB' 'Inactive: 3701232 kB' 'Active(anon): 7092416 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668732 kB' 'Mapped: 165040 kB' 'Shmem: 6426980 kB' 'KReclaimable: 581712 kB' 'Slab: 1457308 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 875596 kB' 'KernelStack: 27888 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8695196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237888 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108667088 kB' 'MemAvailable: 112400520 kB' 'Buffers: 4132 kB' 'Cached: 10615528 kB' 'SwapCached: 0 kB' 'Active: 7583856 kB' 'Inactive: 3701232 kB' 'Active(anon): 7092424 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668744 kB' 'Mapped: 165040 kB' 'Shmem: 6426996 kB' 'KReclaimable: 581712 kB' 'Slab: 1457340 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 875628 kB' 'KernelStack: 27904 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8695216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237888 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
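The same full scan of the meminfo contents repeats for every key the test asks for (HugePages_Surp above, HugePages_Rsvd here, HugePages_Total further down), which is why the log shows the identical list of keys followed by 'continue' several times over. For a one-off lookup outside the test harness, a single awk call is an equivalent way to pull one key; this is only an illustrative alternative, not what setup/common.sh does:

    # Pull one key straight out of /proc/meminfo (illustrative shortcut).
    hugepages_rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "HugePages_Rsvd=${hugepages_rsvd:-0}"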
00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 
20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.549 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
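The backslash-heavy patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not in the script source; they are how bash's xtrace re-quotes the quoted right-hand side of a [[ ... == "$get" ]] comparison, so each lookup is a literal string match rather than a glob. A tiny reproduction, assuming any recent bash:

    set -x
    get=HugePages_Rsvd
    [[ HugePages_Free == "$get" ]] || echo "no match"   # traces as: [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x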
00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.550 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.551 nr_hugepages=1024 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.551 resv_hugepages=0 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.551 surplus_hugepages=0 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.551 anon_hugepages=0 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.551 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108668524 kB' 'MemAvailable: 112401956 kB' 'Buffers: 4132 kB' 'Cached: 10615552 kB' 'SwapCached: 0 kB' 'Active: 7583884 kB' 'Inactive: 3701232 kB' 'Active(anon): 7092452 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668744 kB' 'Mapped: 165040 kB' 'Shmem: 6427020 kB' 'KReclaimable: 581712 kB' 'Slab: 1457340 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 875628 kB' 'KernelStack: 27904 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8695240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237888 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 
20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
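Once anon, surp, and resv have been collected, hugepages.sh cross-checks them against the requested pool size before moving on to the per-node counts: the HugePages_Total reported by the kernel has to equal nr_hugepages plus any surplus and reserved pages. A standalone sketch of that accounting check, with the values taken from this run and an awk lookup standing in for get_meminfo:

    nr_hugepages=1024   # requested by the even_2G_alloc test
    surp=0              # HugePages_Surp from the scan above
    resv=0              # HugePages_Rsvd from the scan above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: total=$total" >&2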
00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.552 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
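At this point hugepages.sh has confirmed that HugePages_Total (1024) equals nr_hugepages + surplus + reserved and has enumerated the two NUMA nodes, so each node is expected to hold an even 512-page share; the records that follow read node 0's and then node 1's surplus counts. A rough sketch of that verification step, with awk standing in for the get_meminfo helper and illustrative variable names (not the exact hugepages.sh code):

  # Global total must equal nr_hugepages + surplus + reserved; each NUMA node
  # is then expected to hold an even share (512 of 1024 on this two-node box).
  meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

  nr_hugepages=1024
  surp=$(meminfo_val HugePages_Surp)
  resv=$(meminfo_val HugePages_Rsvd)
  total=$(meminfo_val HugePages_Total)
  (( total == nr_hugepages + surp + resv )) || echo "unexpected total: $total"

  shopt -s extglob nullglob
  nodes=(/sys/devices/system/node/node+([0-9]))
  no_nodes=${#nodes[@]}
  (( no_nodes > 0 )) || exit 0               # nothing to check without NUMA node entries
  for node in "${nodes[@]}"; do
      n=${node##*node}
      node_total=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
      echo "node$n=$node_total expecting $(( nr_hugepages / no_nodes ))"
  done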
00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61094356 kB' 'MemUsed: 4564652 kB' 'SwapCached: 0 kB' 'Active: 1477092 kB' 'Inactive: 288448 kB' 'Active(anon): 1319344 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1608120 kB' 'Mapped: 36460 kB' 'AnonPages: 160576 kB' 'Shmem: 1161924 kB' 'KernelStack: 13336 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325436 kB' 'Slab: 744828 kB' 'SReclaimable: 325436 kB' 'SUnreclaim: 419392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.553 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.554 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47574168 kB' 'MemUsed: 13105672 kB' 'SwapCached: 0 kB' 'Active: 6106808 kB' 'Inactive: 3412784 kB' 'Active(anon): 5773124 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412784 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9011584 kB' 'Mapped: 128580 kB' 'AnonPages: 508164 kB' 'Shmem: 5265116 kB' 'KernelStack: 14568 kB' 'PageTables: 5664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 256276 kB' 'Slab: 712512 kB' 'SReclaimable: 256276 kB' 'SUnreclaim: 456236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 
20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.555 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
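The records just below close out even_2G_alloc: each node's observed and expected counts are dropped in as associative-array keys (sorted_t / sorted_s), so when every node agrees each array apparently collapses to a single key and the final "[[ 512 == 512 ]]" comparison passes. A small, set-style illustration of that check (the counts are the ones echoed just after this point; everything else is illustrative, not the exact hugepages.sh code):

  # Array keys act as a de-duplicated set of the per-node counts.
  declare -A sorted_t=() sorted_s=()
  declare -A nodes_test=([0]=512 [1]=512)   # observed pages per node
  declare -A nodes_sys=([0]=512 [1]=512)    # expected pages per node
  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[$node]}]=1
      sorted_s[${nodes_sys[$node]}]=1
  done
  # A single distinct key on each side means every node got the same allocation.
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "even 2G allocation confirmed"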
00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:38.556 node0=512 expecting 512 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:38.556 node1=512 expecting 512 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:38.556 00:03:38.556 real 0m4.088s 00:03:38.556 user 0m1.609s 00:03:38.556 sys 0m2.550s 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.556 20:54:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:38.556 ************************************ 00:03:38.556 END TEST even_2G_alloc 00:03:38.556 ************************************ 00:03:38.556 20:54:05 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:03:38.556 20:54:05 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:38.556 20:54:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.556 20:54:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.556 20:54:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.556 ************************************ 00:03:38.556 START TEST odd_alloc 00:03:38.556 ************************************ 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.556 20:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.769 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 
0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:42.769 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108661656 kB' 'MemAvailable: 112395088 kB' 'Buffers: 4132 kB' 'Cached: 10615688 kB' 'SwapCached: 0 kB' 'Active: 7581400 kB' 'Inactive: 3701232 kB' 'Active(anon): 7089968 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666232 kB' 'Mapped: 164732 kB' 'Shmem: 6427156 kB' 'KReclaimable: 581712 kB' 'Slab: 1457492 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 875780 kB' 'KernelStack: 28032 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8696152 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238108 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.769 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 
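Per the odd_alloc setup earlier in the trace (HUGEMEM=2049, nr_hugepages=1025, per-node assignments of 513 and 512) and the "HugePages_Total: 1025" visible in the meminfo dump above, this test spreads an odd page count across the two NUMA nodes: node 1 takes the floor of the even share and node 0 picks up the remainder. A hedged sketch of that split with illustrative names, not the exact hugepages.sh arithmetic:

  # Distributing 1025 pages over two nodes the way the trace shows.
  nr_hugepages=1025   # HUGEMEM=2049 MiB at the default 2 MiB hugepage size
  no_nodes=2          # matches no_nodes=2 reported earlier in the trace
  declare -a nodes_test
  remaining=$nr_hugepages
  for (( i = no_nodes - 1; i >= 0; i-- )); do
      nodes_test[i]=$(( remaining / (i + 1) ))   # even share of what is left
      remaining=$(( remaining - nodes_test[i] ))
  done
  for (( i = 0; i < no_nodes; i++ )); do
      echo "node$i=${nodes_test[i]}"             # -> node0=513, node1=512
  done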
20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 
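The lookup the trace shows is a plain field-by-field read of /proc/meminfo. A minimal re-creation of it, as a sketch only (the helper name below is illustrative, not the test's own; the real setup/common.sh also caches the file into an array and supports per-node files):

  get_meminfo_sketch() {
      local get=$1 var val _
      # mirror the trace: split each meminfo line on ': ', skip until the
      # requested field, then print its numeric value ('kB' falls into _)
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done </proc/meminfo
      return 1
  }
  # e.g. get_meminfo_sketch AnonHugePages   -> 0 on this machine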
20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.770 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.771 20:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108667904 kB' 'MemAvailable: 112401336 kB' 'Buffers: 4132 kB' 'Cached: 10615688 kB' 'SwapCached: 0 kB' 'Active: 7582180 kB' 'Inactive: 3701232 kB' 'Active(anon): 7090748 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666996 kB' 'Mapped: 164824 kB' 'Shmem: 6427156 kB' 'KReclaimable: 581712 kB' 'Slab: 1457476 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 875764 kB' 'KernelStack: 28160 kB' 'PageTables: 9304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8695912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238076 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.771 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.771 20:54:09 
[setup/common.sh@31-32 xtrace repeats the same read/continue step for each field from Cached through HugePages_Total while get_meminfo scans for HugePages_Surp]
20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.772 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.773 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108669556 kB' 'MemAvailable: 112402988 kB' 'Buffers: 4132 kB' 'Cached: 10615704 kB' 'SwapCached: 0 kB' 'Active: 7585316 kB' 'Inactive: 3701232 kB' 'Active(anon): 7093884 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669976 kB' 'Mapped: 164796 kB' 'Shmem: 6427172 kB' 'KReclaimable: 581712 kB' 'Slab: 1457524 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 875812 kB' 'KernelStack: 28016 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8698192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238032 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB'
[setup/common.sh@31-32 xtrace repeats the same read/continue step for each field from MemTotal through FileHugePages while get_meminfo scans for HugePages_Rsvd]
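Each pass also shows common.sh@22-@29 choosing the meminfo source before the dump: /proc/meminfo by default, or the per-node file when a node id is passed, whose lines carry a 'Node <n> ' prefix that gets stripped with an extglob pattern. A short sketch of that step under those assumptions (the node id and the output trimming are illustrative; the trace above runs with node unset):

  shopt -s extglob                     # needed for the +([0-9]) pattern below
  node=0                               # illustrative node id
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")     # 'Node 0 MemTotal: ...' -> 'MemTotal: ...'
  printf '%s\n' "${mem[@]:0:3}"        # show the first few normalized lines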
00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:42.774 nr_hugepages=1025 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.774 resv_hugepages=0 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.774 surplus_hugepages=0 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.774 anon_hugepages=0 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages 
)) 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.774 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108668096 kB' 'MemAvailable: 112401528 kB' 'Buffers: 4132 kB' 'Cached: 10615724 kB' 'SwapCached: 0 kB' 'Active: 7586352 kB' 'Inactive: 3701232 kB' 'Active(anon): 7094920 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670988 kB' 'Mapped: 165152 kB' 'Shmem: 6427192 kB' 'KReclaimable: 581712 kB' 'Slab: 1457428 kB' 'SReclaimable: 581712 kB' 'SUnreclaim: 875716 kB' 'KernelStack: 28176 kB' 'PageTables: 9480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8699924 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238096 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.775 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[the same '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r var val _' xtrace cycle repeats for every remaining /proc/meminfo key, Buffers through CmaFree; none matches HugePages_Total]
00:03:42.776 20:54:09
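The trace above is setup/common.sh's get_meminfo helper scanning a meminfo file key by key until it reaches the requested field. A minimal, self-contained sketch of that lookup is shown below; the function name get_meminfo_value and its exact argument handling are illustrative assumptions, not the SPDK script itself.

    #!/usr/bin/env bash
    # Sketch of the lookup traced above: print the value of <key> from
    # /proc/meminfo, or from /sys/devices/system/node/node<N>/meminfo when a
    # NUMA node is given (per-node files prefix each line with "Node <N> ").
    get_meminfo_value() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo line var val rest
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node [0-9] }        # strip single-digit "Node N " prefix
            line=${line#Node [0-9][0-9] }   # strip two-digit node ids as well
            IFS=': ' read -r var val rest <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                 # e.g. 1025 for HugePages_Total above
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # Usage: get_meminfo_value HugePages_Total
    #        get_meminfo_value HugePages_Surp 0
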
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61102288 kB' 'MemUsed: 4556720 kB' 'SwapCached: 0 kB' 'Active: 1477288 kB' 'Inactive: 288448 kB' 'Active(anon): 1319540 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1608208 kB' 'Mapped: 36544 kB' 'AnonPages: 160628 kB' 'Shmem: 1162012 kB' 'KernelStack: 13512 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325436 kB' 'Slab: 744896 kB' 'SReclaimable: 325436 kB' 'SUnreclaim: 419460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.776 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[the same '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _' xtrace cycle repeats for every remaining node0 meminfo key, Active(file) through Unaccepted; none matches HugePages_Surp]
00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.777 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.778 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47565268 kB' 'MemUsed: 13114572 kB' 'SwapCached: 0 kB' 'Active: 6108996 kB' 'Inactive: 3412784 kB' 'Active(anon): 5775312 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412784 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9011692 kB' 'Mapped: 128608 kB' 'AnonPages: 510236 kB' 'Shmem: 5265224 kB' 'KernelStack: 14536 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 256276 kB' 'Slab: 712532 kB' 'SReclaimable: 256276 kB' 'SUnreclaim: 456256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:42.778 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.778 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[the same '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _' xtrace cycle repeats for every remaining node1 meminfo key, MemFree through HugePages_Free; none matches HugePages_Surp]
00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:42.779 node0=512 expecting 513 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:42.779 node1=513 expecting 512 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:42.779 00:03:42.779 real 0m4.070s 00:03:42.779 user 0m1.629s 00:03:42.779 sys 0m2.511s 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.779 20:54:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:42.779 ************************************ 00:03:42.779 END TEST odd_alloc 00:03:42.779 ************************************ 00:03:42.779 20:54:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:42.779 20:54:09 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:42.779 20:54:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.779 20:54:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.779 20:54:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.779 ************************************ 00:03:42.779 START TEST custom_alloc 00:03:42.779 ************************************ 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- 
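Before the custom_alloc trace continues below, the odd_alloc result just logged comes down to simple arithmetic: 1025 hugepages were requested on a two-node machine, the kernel placed 512 on node0 and 513 on node1, and the test checks that the per-node counts together cover the odd total. A hedged, simplified restatement of that check follows (the real hugepages.sh also folds surplus and reserved pages into the comparison):

    # Simplified restatement of the odd_alloc verification logged above.
    nr_hugepages=1025                 # requested (odd) page count
    node_counts=(512 513)             # HugePages_Total read from node0 and node1
    total=$(( node_counts[0] + node_counts[1] ))
    (( total == nr_hugepages )) && echo "total matches: $total"
    [[ ${node_counts[*]} == "512 513" ]] && echo "odd allocation split as expected"
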
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.779 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.827 20:54:09 
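The get_test_nr_hugepages_per_node calls traced here spread the requested page count evenly across the NUMA nodes when no explicit per-node counts are supplied; for the 512-page request above that yields 256 pages on each of the two nodes. A rough reconstruction of that loop, inferred from the intermediate values in the trace rather than copied from hugepages.sh:

    # Even split of _nr_hugepages across _no_nodes, as suggested by the trace
    # (512 pages over 2 nodes -> nodes_test=(256 256)).
    _nr_hugepages=512
    _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # 256, then 0
        : $(( --_no_nodes ))                                  # 1, then 0
    done
    echo "${nodes_test[@]}"   # 256 256
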
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:42.827 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.828 20:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.132 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 
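For the custom_alloc case the per-node counts come from nodes_hp rather than an even split: the 1048576 kB (1 GiB) request becomes 512 default-sized (2048 kB) hugepages on node 0 and the 2097152 kB (2 GiB) request becomes 1024 pages on node 1, which is exactly the HUGENODE string handed to scripts/setup.sh above. A small sketch of that bookkeeping; the helper name pages_for_kb is made up for illustration:

    # Convert kB sizes into counts of default 2048 kB hugepages and build the
    # HUGENODE list plus the overall total (512 + 1024 = 1536) seen in the log.
    default_hugepagesz_kb=2048                 # Hugepagesize from /proc/meminfo
    pages_for_kb() { echo $(( $1 / default_hugepagesz_kb )); }

    declare -a nodes_hp
    nodes_hp[0]=$(pages_for_kb 1048576)        # 1 GiB on node 0 -> 512 pages
    nodes_hp[1]=$(pages_for_kb 2097152)        # 2 GiB on node 1 -> 1024 pages

    hugenode='' total=0
    for node in "${!nodes_hp[@]}"; do
        hugenode+="nodes_hp[$node]=${nodes_hp[node]},"
        (( total += nodes_hp[node] ))
    done
    echo "HUGENODE=${hugenode%,}"              # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$total"                 # 1536
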
00:03:46.132 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:46.132 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.132 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.417 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107605016 kB' 'MemAvailable: 111338416 kB' 'Buffers: 4132 kB' 'Cached: 10615860 kB' 'SwapCached: 0 kB' 'Active: 7578164 kB' 'Inactive: 3701232 kB' 'Active(anon): 7086732 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 662696 kB' 'Mapped: 164300 kB' 'Shmem: 6427328 kB' 'KReclaimable: 581680 kB' 'Slab: 1457560 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 875880 kB' 'KernelStack: 27840 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8687588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237836 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
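For orientation: the wall of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above is ordinary bash xtrace from the get_meminfo helper in setup/common.sh, which walks every field of the /proc/meminfo snapshot printed earlier until it reaches the requested key (AnonHugePages here, reported as 0 kB), echoes that value, and returns. A minimal standalone sketch of that lookup — covering only the global /proc/meminfo case shown in this trace, not the per-node /sys/devices/system/node/*/meminfo variant — would be:

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup over the global /proc/meminfo only.
    get_meminfo() {
        local get=$1 mem_f=/proc/meminfo
        local -a mem
        local var val _ line
        mapfile -t mem < "$mem_f"            # one "Key:   value kB" entry per element
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"    # split on ':' and whitespace
            [[ $var == "$get" ]] || continue # skip every other key, as in the trace above
            echo "$val"                      # e.g. 0 for AnonHugePages, 1536 for HugePages_Total
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages

The same scan repeats below for HugePages_Surp and HugePages_Rsvd, which is why the identical key-by-key "continue" pattern appears three times in this stretch of the log.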
00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.418 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107604764 kB' 'MemAvailable: 111338164 kB' 'Buffers: 4132 kB' 'Cached: 10615860 kB' 'SwapCached: 0 kB' 'Active: 7577348 kB' 'Inactive: 3701232 kB' 'Active(anon): 7085916 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661836 kB' 'Mapped: 164316 kB' 'Shmem: 6427328 kB' 'KReclaimable: 581680 kB' 'Slab: 1457588 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 875908 kB' 'KernelStack: 27792 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8687608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237820 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.419 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107605080 kB' 'MemAvailable: 111338480 kB' 'Buffers: 4132 kB' 'Cached: 10615880 kB' 'SwapCached: 0 kB' 'Active: 7577332 kB' 'Inactive: 3701232 kB' 'Active(anon): 7085900 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661796 kB' 'Mapped: 164316 kB' 'Shmem: 6427348 kB' 'KReclaimable: 581680 kB' 'Slab: 1457588 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 875908 kB' 'KernelStack: 27776 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8687764 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237836 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.420 20:54:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.420 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
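The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" records above is bash xtrace from the setup/common.sh get_meminfo helper: it walks /proc/meminfo (or a per-node meminfo file when a node id is passed) one "key: value" pair at a time with IFS=': ' and read, and echoes the value once the requested key matches. A minimal, hedged sketch of that lookup follows; it is simplified and not the SPDK setup/common.sh source verbatim (the helper name and the sed-based "Node N" prefix strip stand in for the script's mapfile/extglob handling):

get_meminfo_sketch() {
        local get=$1 node=${2:-}                 # key to fetch, optional NUMA node id
        local mem_f=/proc/meminfo
        # use the per-node file when a node id was given and sysfs exposes it
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
                mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # per-node lines carry a "Node N " prefix; drop it so keys compare cleanly
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
}
# e.g. get_meminfo_sketch HugePages_Rsvd    -> 0    (the resv value echoed in the trace)
#      get_meminfo_sketch HugePages_Surp 0  -> 0    (node0 surplus, read further below)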
00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:46.421 nr_hugepages=1536 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.421 resv_hugepages=0 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.421 surplus_hugepages=0 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.421 anon_hugepages=0 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107605780 kB' 'MemAvailable: 111339180 kB' 'Buffers: 4132 kB' 'Cached: 10615904 kB' 'SwapCached: 0 kB' 'Active: 7577348 kB' 'Inactive: 3701232 kB' 'Active(anon): 7085916 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 661788 kB' 'Mapped: 164316 kB' 'Shmem: 6427372 kB' 'KReclaimable: 581680 kB' 'Slab: 1457588 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 875908 kB' 'KernelStack: 27776 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8687792 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237836 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.421 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
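The /proc/meminfo snapshot dumped a few records back reports 'HugePages_Total: 1536', 'Hugepagesize: 2048 kB' and 'Hugetlb: 3145728 kB', which are self-consistent: 1536 pages of 2048 kB each is exactly the 3145728 kB the kernel accounts under Hugetlb. A one-line check, using only values copied from the log:

echo $(( 1536 * 2048 ))   # 3145728, i.e. Hugetlb kB == HugePages_Total * Hugepagesize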
00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.422 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61087672 kB' 'MemUsed: 4571336 kB' 'SwapCached: 0 kB' 'Active: 1474824 kB' 'Inactive: 288448 kB' 'Active(anon): 1317076 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1608216 kB' 'Mapped: 35840 kB' 'AnonPages: 158148 kB' 'Shmem: 1162020 kB' 'KernelStack: 13272 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325404 kB' 'Slab: 744772 kB' 'SReclaimable: 325404 kB' 'SUnreclaim: 419368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 
20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 
20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
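The get_nodes loop traced earlier (setup/hugepages.sh@29-30) iterates the /sys/devices/system/node/node<N> entries and derives the NUMA index with the parameter expansion ${node##*node}, filling in 512 for node0 and 1024 for node1. The expansion is easy to verify in isolation (path value taken from the trace):

node=/sys/devices/system/node/node1
echo "${node##*node}"   # prints 1 -- the longest prefix ending in "node" is stripped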
00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
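What the custom_alloc test is verifying across this stretch is plain accounting: the global pool (HugePages_Total 1536, HugePages_Rsvd 0, surplus 0) must equal nr_hugepages + surp + resv, and the per-node pools read here (512 on node0, 1024 on node1, each with HugePages_Surp 0) must add up to the same 1536. A hedged sketch of that bookkeeping using the values from this log; the variable names loosely mirror the trace, but this standalone snippet is illustrative, not hugepages.sh itself:

nr_hugepages=1536 surp=0 resv=0
nodes_test=()                                  # expected per-node counts, indexed by NUMA id
nodes_test[0]=512
nodes_test[1]=1024
(( 1536 == nr_hugepages + surp + resv )) && echo "global pool consistent"
total=0
for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))         # resv is 0, so the targets stay 512 and 1024
        (( total += nodes_test[node] ))
done
(( total == nr_hugepages )) && echo "per-node split (512 + 1024) matches 1536"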
00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:46.423 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 46523148 kB' 'MemUsed: 14156692 kB' 'SwapCached: 0 kB' 'Active: 6102836 kB' 'Inactive: 3412784 kB' 'Active(anon): 5769152 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412784 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9011856 kB' 'Mapped: 128476 kB' 'AnonPages: 503964 kB' 'Shmem: 5265388 kB' 'KernelStack: 14472 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 256276 kB' 'Slab: 712816 kB' 'SReclaimable: 256276 kB' 'SUnreclaim: 456540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 
20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue
00:03:46.424 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # ... [get_meminfo scan: every remaining per-node meminfo key, Mlocked through HugePages_Free, fails the HugePages_Surp match and falls through to continue] ...
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
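The repetitive IFS=': ' / read -r var val _ / continue pattern in this trace is setup/common.sh's get_meminfo helper walking /proc/meminfo (or a node's meminfo file) one "key: value" pair at a time until it reaches the requested field. A rough stand-alone bash sketch of that technique follows; it is a simplified re-creation for readers of this log, not the project's exact helper, and get_meminfo_sketch is an illustrative name.

#!/usr/bin/env bash
# Illustrative re-creation of the meminfo scan traced above (not the real setup/common.sh).
shopt -s extglob   # needed for the +([0-9]) prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node id, read the per-node file instead of the system-wide one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines are prefixed with "Node <id> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the skip seen over and over in the xtrace
        echo "$val"
        return 0
    done
    return 1
}

# Example: get_meminfo_sketch HugePages_Surp      -> system-wide surplus hugepages
#          get_meminfo_sketch HugePages_Free 1    -> free hugepages on node 1

Scanning line by line with read and a custom IFS keeps the helper dependency-free, which is why the xtrace shows one continue per non-matching key rather than a single grep/awk call.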
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:46.425 node0=512 expecting 512
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:46.425 node1=1024 expecting 1024
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:46.425
00:03:46.425 real 0m4.069s
00:03:46.425 user 0m1.539s
00:03:46.425 sys 0m2.592s
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:46.425 20:54:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:46.425 ************************************
00:03:46.425 END TEST custom_alloc
00:03:46.425 ************************************
00:03:46.685 20:54:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:46.685 20:54:13 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:46.685 20:54:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:46.685 20:54:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:46.685 20:54:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:46.685 ************************************
00:03:46.685 START TEST no_shrink_alloc
00:03:46.685 ************************************
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.685 20:54:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:50.898 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:50.898 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
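In the trace just above, get_test_nr_hugepages 2097152 0 turns a 2097152 kB request into nr_hugepages=1024 (the numbers line up with the 2048 kB Hugepagesize reported in the meminfo dumps below), and get_test_nr_hugepages_per_node pins all 1024 pages to node 0 via nodes_test. A minimal sketch of that bookkeeping, assuming kB units throughout and using illustrative names rather than the literal hugepages.sh code:

#!/usr/bin/env bash
# Minimal sketch of the per-node hugepage bookkeeping traced above
# (assumes size and Hugepagesize are both in kB; illustrative, not hugepages.sh itself).
default_hugepages=2048          # kB, i.e. one 2 MiB hugepage
declare -a nodes_test=()

get_test_nr_hugepages_sketch() {
    local size=$1; shift
    local -a node_ids=("$@")
    local nr_hugepages=$((size / default_hugepages))
    if ((${#node_ids[@]} > 0)); then
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages     # pin the whole request to each listed node
        done
    else
        nodes_test[0]=$nr_hugepages            # no node list: default to node 0
    fi
}

get_test_nr_hugepages_sketch 2097152 0          # -> nodes_test[0]=1024, as in the trace
echo "node0 gets ${nodes_test[0]} hugepages"

The custom_alloc run that just finished ended up with the analogous 512-on-node0 / 1024-on-node1 layout that its "expecting" lines verified.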
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.898 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108672932 kB' 'MemAvailable: 112406332 kB' 'Buffers: 4132 kB' 'Cached: 10616052 kB' 'SwapCached: 0 kB' 'Active: 7579576 kB' 'Inactive: 3701232 kB' 'Active(anon): 7088144 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663364 kB' 'Mapped: 164424 kB' 'Shmem: 6427520 kB' 'KReclaimable: 581680 kB' 'Slab: 1457808 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 876128 kB' 'KernelStack: 27856 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8689372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237900 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.899 
20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:50.899 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # ... [get_meminfo scan: every /proc/meminfo key from Buffers through VmallocTotal fails the AnonHugePages match and falls through to continue] ...
00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108674448 kB' 'MemAvailable: 112407848 kB' 'Buffers: 4132 kB' 'Cached: 10616056 kB' 'SwapCached: 0 kB' 'Active: 7579032 kB' 'Inactive: 3701232 kB' 'Active(anon): 7087600 
kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663336 kB' 'Mapped: 164332 kB' 'Shmem: 6427524 kB' 'KReclaimable: 581680 kB' 'Slab: 1457776 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 876096 kB' 'KernelStack: 27808 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8692488 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:03:50.900 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # ... [get_meminfo scan: every /proc/meminfo key from Active through Unaccepted fails the HugePages_Surp match and falls through to continue] ...
00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.901 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108675812 kB' 'MemAvailable: 112409212 kB' 'Buffers: 4132 kB' 'Cached: 10616072 kB' 'SwapCached: 0 kB' 'Active: 7579144 kB' 'Inactive: 3701232 kB' 'Active(anon): 7087712 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663400 kB' 'Mapped: 164336 kB' 'Shmem: 6427540 kB' 
'KReclaimable: 581680 kB' 'Slab: 1457736 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 876056 kB' 'KernelStack: 27824 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8692508 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237900 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
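One reading aid for these lines: the backslash-riddled right-hand side, e.g. \H\u\g\e\P\a\g\e\s\_\R\s\v\d, is not corruption in the log. Inside [[ ... ]] the word after == is a pattern, and when that word is a quoted (literal) operand bash's xtrace prints it with each character escaped, so the escaped form just means the field name is being compared literally against the string HugePages_Rsvd. A minimal reproduction (our own two-liner, mirroring the script's quoted comparison):

  set -x
  var=MemTotal get=HugePages_Rsvd
  [[ $var == "$get" ]]   # xtrace prints roughly: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]

The test is false for every field except the requested one, which is why nearly every comparison in the trace is followed by a continue.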
00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.902 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.904 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.905 nr_hugepages=1024 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.905 resv_hugepages=0 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.905 surplus_hugepages=0 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.905 anon_hugepages=0 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108677364 kB' 'MemAvailable: 112410764 kB' 'Buffers: 4132 kB' 'Cached: 10616096 kB' 'SwapCached: 0 kB' 'Active: 7579244 kB' 'Inactive: 3701232 kB' 'Active(anon): 7087812 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663500 kB' 'Mapped: 164336 kB' 'Shmem: 6427564 kB' 'KReclaimable: 581680 kB' 'Slab: 1457736 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 876056 kB' 'KernelStack: 27776 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8690800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237884 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.905 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.906 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60055840 kB' 'MemUsed: 5603168 kB' 'SwapCached: 0 kB' 'Active: 1477564 kB' 'Inactive: 288448 kB' 'Active(anon): 1319816 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1608356 kB' 'Mapped: 36348 kB' 'AnonPages: 160876 kB' 'Shmem: 1162160 kB' 'KernelStack: 13320 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325404 kB' 'Slab: 744912 kB' 'SReclaimable: 325404 kB' 'SUnreclaim: 419508 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 
20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.907 20:54:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[... 00:03:50.907-00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32: the remaining /proc/meminfo fields (Shmem through HugePages_Free) are read and skipped while scanning for HugePages_Surp ...]
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.908 20:54:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:54.207 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:54.207 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:54.207 INFO: Requested 512 hugepages but 1024 already allocated on node0
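For anyone reproducing the allocation shown above by hand, here is a minimal sketch of the same request and check. It assumes that scripts/setup.sh honors the NRHUGE and CLEAR_HUGE variables set at setup/hugepages.sh@202 in the trace, and uses the default 2048 kB hugepage size reported in the meminfo snapshots; the exact invocation is illustrative, not part of the job output.

  # Request 512 x 2048 kB hugepages via SPDK's setup script without clearing
  # the existing allocation (as the INFO line above shows, a larger existing
  # allocation is simply kept).
  sudo NRHUGE=512 CLEAR_HUGE=no /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
  # Read back the per-node count that "node0=1024 expecting 1024" refers to.
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages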
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.473 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108715460 kB' 'MemAvailable: 112448860 kB' 'Buffers: 4132 kB' 'Cached: 10616204 kB' 'SwapCached: 0 kB' 'Active: 7581216 kB' 'Inactive: 3701232 kB' 'Active(anon): 7089784 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664908 kB' 'Mapped: 164500 kB' 'Shmem: 6427672 kB' 'KReclaimable: 581680 kB' 'Slab: 1457940 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 876260 kB' 'KernelStack: 28160 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693272 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237932 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB'
[... 00:03:54.473-00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32: each field from MemTotal through HardwareCorrupted is read and skipped while scanning for AnonHugePages ...]
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
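The long runs of '[[ <field> == ... ]]' / 'continue' entries collapsed above are one linear lookup over a /proc/meminfo snapshot. The sketch below is reconstructed from the trace rather than copied from setup/common.sh (the real helper also checks the per-node meminfo files at common.sh@23 and strips their 'Node <n> ' prefix at common.sh@29); the function name get_mem_field is illustrative only.

  # get_mem_field is an illustrative name, not the setup/common.sh function.
  # It prints the value of one /proc/meminfo field, or 0 if the field is absent.
  get_mem_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every line until the requested field is reached, then print it.
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < /proc/meminfo
      echo 0
  }
  get_mem_field HugePages_Surp   # prints 0 for the snapshots shown in this run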
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.475 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108717812 kB' 'MemAvailable: 112451212 kB' 'Buffers: 4132 kB' 'Cached: 10616208 kB' 'SwapCached: 0 kB' 'Active: 7581744 kB' 'Inactive: 3701232 kB' 'Active(anon): 7090312 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665452 kB' 'Mapped: 164496 kB' 'Shmem: 6427676 kB' 'KReclaimable: 581680 kB' 'Slab: 1458036 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 876356 kB' 'KernelStack: 28112 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238012 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB'
[... 00:03:54.475-00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32: each field from MemTotal through HugePages_Rsvd is read and skipped while scanning for HugePages_Surp ...]
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
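When debugging by hand, the hugepage counters that verify_nr_hugepages fetches one get_meminfo call at a time can be read in a single pass; a small example for reference, not part of the test flow:

  # One-pass view of the counters fetched above via get_meminfo:
  grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
  # For the snapshots in this run this prints:
  #   HugePages_Total:    1024
  #   HugePages_Free:     1024
  #   HugePages_Rsvd:        0
  #   HugePages_Surp:        0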
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108719628 kB' 'MemAvailable: 112453028 kB' 'Buffers: 4132 kB' 'Cached: 10616228 kB' 'SwapCached: 0 kB' 'Active: 7580708 kB' 'Inactive: 3701232 kB' 'Active(anon): 7089276 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664824 kB' 'Mapped: 164360 kB' 'Shmem: 6427696 kB' 'KReclaimable: 581680 kB' 'Slab: 1457924 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 876244 kB' 'KernelStack: 28064 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238044 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB'
[... 00:03:54.477 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32: fields from MemTotal through PageTables are read and skipped while scanning for HugePages_Rsvd; the captured trace breaks off at this point ...]
continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.479 nr_hugepages=1024 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.479 resv_hugepages=0 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.479 surplus_hugepages=0 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.479 anon_hugepages=0 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108720244 kB' 'MemAvailable: 112453644 kB' 'Buffers: 4132 kB' 'Cached: 10616248 kB' 'SwapCached: 0 kB' 'Active: 7580424 kB' 'Inactive: 3701232 kB' 'Active(anon): 7088992 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664492 kB' 'Mapped: 164360 kB' 'Shmem: 6427716 kB' 'KReclaimable: 581680 kB' 'Slab: 1457924 kB' 'SReclaimable: 581680 kB' 'SUnreclaim: 876244 kB' 'KernelStack: 27984 kB' 'PageTables: 9304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238060 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4124020 kB' 'DirectMap2M: 57421824 kB' 'DirectMap1G: 74448896 kB' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.479 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.480 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.481 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60077972 kB' 'MemUsed: 5581036 kB' 'SwapCached: 0 kB' 'Active: 1477216 kB' 'Inactive: 288448 kB' 'Active(anon): 1319468 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1608480 kB' 'Mapped: 35844 kB' 'AnonPages: 160380 kB' 'Shmem: 1162284 kB' 'KernelStack: 13400 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325404 kB' 'Slab: 745272 kB' 'SReclaimable: 325404 kB' 'SUnreclaim: 419868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 
20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:03:54.482 node0=1024 expecting 1024 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:54.482 00:03:54.482 real 0m7.911s 00:03:54.482 user 0m3.138s 00:03:54.482 sys 0m4.896s 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.482 20:54:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.482 ************************************ 00:03:54.482 END TEST no_shrink_alloc 00:03:54.482 ************************************ 00:03:54.482 20:54:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:54.482 20:54:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:54.482 00:03:54.482 real 0m29.029s 00:03:54.482 user 0m11.391s 00:03:54.482 sys 0m18.066s 00:03:54.482 20:54:21 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.482 20:54:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.482 ************************************ 00:03:54.482 END TEST hugepages 00:03:54.482 ************************************ 00:03:54.743 20:54:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:54.743 20:54:21 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:54.743 20:54:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.743 20:54:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.743 20:54:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:54.743 ************************************ 00:03:54.743 START TEST driver 00:03:54.743 ************************************ 00:03:54.743 20:54:21 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:54.743 * Looking for test storage... 
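Before wading into the guess_driver trace that follows, the decision it exercises fits in a few lines: setup/driver.sh prefers vfio-pci when the kernel exposes IOMMU groups (370 of them on this host, per the trace) and "modprobe --show-depends vfio_pci" resolves to real .ko modules. A condensed sketch of that pick_driver/vfio path, not the script verbatim, with the fallback branch left out because this run never reaches it:

# Condensed sketch of the vfio-pci selection exercised below.
pick_driver() {
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local iommu_groups=(/sys/kernel/iommu_groups/*)   # 370 entries in this run
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # vfio-pci is only usable if modprobe can resolve its module chain.
        if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
}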
00:03:54.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:54.743 20:54:21 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:54.743 20:54:21 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.743 20:54:21 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.161 20:54:26 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:00.161 20:54:26 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.161 20:54:26 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.161 20:54:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:00.161 ************************************ 00:04:00.161 START TEST guess_driver 00:04:00.161 ************************************ 00:04:00.161 20:54:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:00.161 20:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:00.161 20:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:00.161 20:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:00.161 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:00.161 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:00.161 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:00.161 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:00.161 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:00.161 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:00.161 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:00.161 20:54:27 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:00.161 Looking for driver=vfio-pci 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.161 20:54:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.464 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.724 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.725 20:54:30 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.010 00:04:09.010 real 0m9.079s 00:04:09.010 user 0m2.991s 00:04:09.010 sys 0m5.334s 00:04:09.010 20:54:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.010 20:54:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:09.010 ************************************ 00:04:09.010 END TEST guess_driver 00:04:09.010 ************************************ 00:04:09.010 20:54:36 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:09.010 00:04:09.010 real 0m14.318s 00:04:09.010 user 0m4.550s 00:04:09.010 sys 0m8.254s 00:04:09.010 20:54:36 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.010 20:54:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:09.010 ************************************ 00:04:09.010 END TEST driver 00:04:09.010 ************************************ 00:04:09.010 20:54:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:09.010 20:54:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:09.010 20:54:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.010 20:54:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.010 20:54:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:09.010 ************************************ 00:04:09.010 START TEST devices 00:04:09.010 ************************************ 00:04:09.010 20:54:36 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:09.010 * Looking for test storage... 00:04:09.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:09.269 20:54:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:09.269 20:54:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:09.269 20:54:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.269 20:54:36 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:13.472 20:54:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:13.472 
20:54:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:13.472 No valid GPT data, bailing 00:04:13.472 20:54:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:13.472 20:54:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:13.472 20:54:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:13.472 20:54:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:13.472 20:54:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:13.472 20:54:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:13.472 20:54:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.472 20:54:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.472 ************************************ 00:04:13.472 START TEST nvme_mount 00:04:13.472 ************************************ 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.472 20:54:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:14.412 Creating new GPT entries in memory. 00:04:14.412 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.412 other utilities. 00:04:14.412 20:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.412 20:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.412 20:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.412 20:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.412 20:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:15.794 Creating new GPT entries in memory. 00:04:15.794 The operation has completed successfully. 00:04:15.794 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:15.794 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1712665 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.795 20:54:42 
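Stripped of the xtrace plumbing, the nvme_mount preparation that was just traced is a short sgdisk/mkfs/mount sequence. A condensed sketch, assuming /dev/nvme0n1 is the scratch disk selected above and using a local mount point as a stand-in for the spdk/test/setup/nvme_mount path (the real script also waits for the partition uevent via sync_dev_uevents.sh, which is omitted here):

# Condensed sketch of the nvme_mount setup traced above (illustrative only).
disk=/dev/nvme0n1
mnt=$PWD/nvme_mount                                # stand-in mount point

sgdisk "$disk" --zap-all                           # wipe any existing GPT/MBR
flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition, as in the trace
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                               # dummy file the verify step checks for

# matching teardown, seen a little later in the log:
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"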
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.795 20:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.092 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:19.093 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.093 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:19.353 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:19.353 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:19.353 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:19.353 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.353 20:54:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.657 20:54:49 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.657 20:54:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.867 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.867 00:04:26.867 real 0m13.087s 00:04:26.867 user 0m3.831s 00:04:26.867 sys 0m6.963s 00:04:26.867 20:54:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.867 20:54:53 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:26.867 ************************************ 00:04:26.867 END TEST nvme_mount 00:04:26.867 ************************************ 00:04:26.867 20:54:53 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:26.867 20:54:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:26.867 20:54:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.867 20:54:53 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.867 20:54:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.867 ************************************ 00:04:26.867 START TEST dm_mount 00:04:26.867 ************************************ 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:26.867 20:54:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:27.809 Creating new GPT entries in memory. 00:04:27.809 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.809 other utilities. 00:04:27.809 20:54:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.809 20:54:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.809 20:54:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:27.809 20:54:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.809 20:54:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.751 Creating new GPT entries in memory. 00:04:28.751 The operation has completed successfully. 00:04:28.751 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.751 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.751 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.751 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.751 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:29.693 The operation has completed successfully. 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1717939 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.693 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.694 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:33.927 20:55:00 
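The dm_mount pass follows the same pattern, except the filesystem is created on a device-mapper target assembled from the two freshly made partitions. The log only shows that nvme0n1p1 and nvme0n1p2 both end up as holders of dm-1; the linear concatenation table in this sketch is therefore an assumption, and the mount point is again a local stand-in:

# Sketch of the dm_mount flow traced above; the dmsetup table is assumed,
# since the actual table is fed to dmsetup on stdin and never echoed here.
p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1") s2=$(blockdev --getsz "$p2")   # sizes in 512-byte sectors

dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

readlink -f /dev/mapper/nvme_dm_test               # -> /dev/dm-1 in this run
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p dm_mount && mount /dev/mapper/nvme_dm_test dm_mount

# teardown, as in the cleanup_dm trace that follows:
umount dm_mount
dmsetup remove --force nvme_dm_test
wipefs --all "$p1" "$p2"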
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.927 20:55:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.232 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.233 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.493 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:37.494 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:37.494 00:04:37.494 real 0m10.844s 00:04:37.494 user 0m2.864s 00:04:37.494 sys 0m5.039s 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.494 20:55:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.494 ************************************ 00:04:37.494 END TEST dm_mount 00:04:37.494 ************************************ 00:04:37.494 20:55:04 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:37.494 20:55:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:37.494 20:55:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:37.494 20:55:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.494 20:55:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.494 20:55:04 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.494 20:55:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.494 20:55:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.754 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:37.754 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:37.754 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.754 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.754 20:55:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:37.754 20:55:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:37.754 20:55:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.754 20:55:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.754 20:55:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.754 20:55:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.754 20:55:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:37.754 00:04:37.754 real 0m28.749s 00:04:37.754 user 0m8.369s 00:04:37.754 sys 0m15.037s 00:04:37.754 20:55:04 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.754 20:55:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.754 ************************************ 00:04:37.754 END TEST devices 00:04:37.754 ************************************ 00:04:37.754 20:55:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:37.754 00:04:37.754 real 1m38.771s 00:04:37.754 user 0m33.038s 00:04:37.754 sys 0m56.985s 00:04:37.754 20:55:04 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.754 20:55:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.754 ************************************ 00:04:37.754 END TEST setup.sh 00:04:37.754 ************************************ 00:04:37.754 20:55:05 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.754 20:55:05 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:41.052 Hugepages 00:04:41.052 node hugesize free / total 00:04:41.052 node0 1048576kB 0 / 0 00:04:41.052 node0 2048kB 2048 / 2048 00:04:41.052 node1 1048576kB 0 / 0 00:04:41.052 node1 2048kB 0 / 0 00:04:41.052 00:04:41.052 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.052 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:41.052 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:41.052 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:41.052 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:41.052 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:41.052 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:41.052 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:41.052 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:41.052 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:41.052 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:41.052 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:41.052 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:41.052 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:41.052 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:41.052 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:41.052 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:41.052 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:41.052 20:55:08 -- spdk/autotest.sh@130 -- # uname -s 00:04:41.052 20:55:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:41.052 20:55:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:41.052 20:55:08 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.254 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:45.254 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:46.635 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:46.896 20:55:13 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:47.837 20:55:14 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:47.837 20:55:14 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:47.837 20:55:14 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:47.837 20:55:14 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:47.837 20:55:14 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:47.837 20:55:14 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:47.837 20:55:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.837 20:55:14 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:47.837 20:55:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:47.837 20:55:15 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:47.837 20:55:15 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:47.837 20:55:15 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.118 Waiting for block devices as requested 00:04:52.118 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:52.118 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:52.118 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:52.118 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:52.118 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:52.118 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:52.118 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:52.378 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:52.378 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:04:52.378 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:52.638 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:52.638 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:52.638 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:52.638 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:52.898 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:52.898 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:52.898 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:52.898 20:55:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:52.898 20:55:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:52.898 20:55:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:52.898 20:55:20 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:04:52.898 20:55:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:52.898 20:55:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:52.898 20:55:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:52.898 20:55:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:52.898 20:55:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:52.898 20:55:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:53.158 20:55:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:53.158 20:55:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:53.158 20:55:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:53.158 20:55:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:04:53.158 20:55:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:53.158 20:55:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:53.158 20:55:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:53.158 20:55:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:53.158 20:55:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:53.158 20:55:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:53.158 20:55:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:53.158 20:55:20 -- common/autotest_common.sh@1557 -- # continue 00:04:53.158 20:55:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:53.158 20:55:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.158 20:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:53.158 20:55:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:53.158 20:55:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.158 20:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:53.158 20:55:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.363 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
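For reference, the NVMe probe traced in this block reduces to a few shell steps: enumerate the NVMe BDFs with gen_nvme.sh, resolve the controller node behind the BDF via sysfs, then read OACS (bit 0x8 advertises Namespace Management) and the unallocated capacity from nvme id-ctrl. A minimal sketch of that flow, reusing the workspace path and the single 0000:65:00.0 controller seen in this run; the variable names are illustrative, not taken from the test scripts:

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Enumerate NVMe BDFs the same way the get_nvme_bdfs trace above does.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
bdf=${bdfs[0]}                          # 0000:65:00.0 in this run

# Resolve the /dev/nvmeX node behind the BDF via sysfs.
sysfs_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$sysfs_path")    # /dev/nvme0 here

# OACS bit 3 (0x8) means the controller supports Namespace Management.
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
ns_manage=$((oacs & 0x8))

if [[ $ns_manage -ne 0 ]]; then
    # unvmcap is the unallocated NVM capacity; 0 means nothing to revert.
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    echo "ctrlr=$ctrlr oacs=$oacs ns_manage=$ns_manage unvmcap=$unvmcap"
fi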
00:04:57.363 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.363 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:57.363 20:55:24 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:57.363 20:55:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.363 20:55:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.363 20:55:24 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:57.363 20:55:24 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:57.363 20:55:24 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.363 20:55:24 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:57.363 20:55:24 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:57.363 20:55:24 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:57.363 20:55:24 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:57.363 20:55:24 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:57.363 20:55:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.363 20:55:24 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:57.363 20:55:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:57.363 20:55:24 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:57.363 20:55:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:57.363 20:55:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:57.363 20:55:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:57.363 20:55:24 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:04:57.363 20:55:24 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:57.363 20:55:24 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:57.363 20:55:24 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:57.363 20:55:24 -- common/autotest_common.sh@1593 -- # return 0 00:04:57.363 20:55:24 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:57.363 20:55:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:57.363 20:55:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:57.363 20:55:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:57.363 20:55:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:57.363 20:55:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.363 20:55:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.363 20:55:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:57.363 20:55:24 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:57.363 20:55:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.363 20:55:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.363 20:55:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.363 ************************************ 00:04:57.363 START TEST env 00:04:57.363 ************************************ 00:04:57.363 20:55:24 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:57.363 * Looking for test storage... 
00:04:57.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:57.363 20:55:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:57.363 20:55:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.363 20:55:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.363 20:55:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.363 ************************************ 00:04:57.363 START TEST env_memory 00:04:57.363 ************************************ 00:04:57.363 20:55:24 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:57.363 00:04:57.363 00:04:57.363 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.363 http://cunit.sourceforge.net/ 00:04:57.363 00:04:57.363 00:04:57.363 Suite: memory 00:04:57.363 Test: alloc and free memory map ...[2024-07-15 20:55:24.552926] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:57.363 passed 00:04:57.363 Test: mem map translation ...[2024-07-15 20:55:24.578667] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:57.363 [2024-07-15 20:55:24.578699] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:57.363 [2024-07-15 20:55:24.578744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:57.363 [2024-07-15 20:55:24.578751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:57.363 passed 00:04:57.363 Test: mem map registration ...[2024-07-15 20:55:24.634080] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:57.363 [2024-07-15 20:55:24.634108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:57.363 passed 00:04:57.626 Test: mem map adjacent registrations ...passed 00:04:57.626 00:04:57.626 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.626 suites 1 1 n/a 0 0 00:04:57.626 tests 4 4 4 0 0 00:04:57.626 asserts 152 152 152 0 n/a 00:04:57.626 00:04:57.626 Elapsed time = 0.194 seconds 00:04:57.626 00:04:57.626 real 0m0.209s 00:04:57.626 user 0m0.200s 00:04:57.626 sys 0m0.008s 00:04:57.626 20:55:24 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.626 20:55:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:57.626 ************************************ 00:04:57.626 END TEST env_memory 00:04:57.626 ************************************ 00:04:57.626 20:55:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:57.626 20:55:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:57.626 20:55:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:04:57.626 20:55:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.626 20:55:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.626 ************************************ 00:04:57.626 START TEST env_vtophys 00:04:57.626 ************************************ 00:04:57.626 20:55:24 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:57.626 EAL: lib.eal log level changed from notice to debug 00:04:57.626 EAL: Detected lcore 0 as core 0 on socket 0 00:04:57.626 EAL: Detected lcore 1 as core 1 on socket 0 00:04:57.626 EAL: Detected lcore 2 as core 2 on socket 0 00:04:57.626 EAL: Detected lcore 3 as core 3 on socket 0 00:04:57.626 EAL: Detected lcore 4 as core 4 on socket 0 00:04:57.626 EAL: Detected lcore 5 as core 5 on socket 0 00:04:57.626 EAL: Detected lcore 6 as core 6 on socket 0 00:04:57.626 EAL: Detected lcore 7 as core 7 on socket 0 00:04:57.626 EAL: Detected lcore 8 as core 8 on socket 0 00:04:57.626 EAL: Detected lcore 9 as core 9 on socket 0 00:04:57.626 EAL: Detected lcore 10 as core 10 on socket 0 00:04:57.626 EAL: Detected lcore 11 as core 11 on socket 0 00:04:57.626 EAL: Detected lcore 12 as core 12 on socket 0 00:04:57.626 EAL: Detected lcore 13 as core 13 on socket 0 00:04:57.626 EAL: Detected lcore 14 as core 14 on socket 0 00:04:57.626 EAL: Detected lcore 15 as core 15 on socket 0 00:04:57.626 EAL: Detected lcore 16 as core 16 on socket 0 00:04:57.626 EAL: Detected lcore 17 as core 17 on socket 0 00:04:57.626 EAL: Detected lcore 18 as core 18 on socket 0 00:04:57.626 EAL: Detected lcore 19 as core 19 on socket 0 00:04:57.626 EAL: Detected lcore 20 as core 20 on socket 0 00:04:57.626 EAL: Detected lcore 21 as core 21 on socket 0 00:04:57.626 EAL: Detected lcore 22 as core 22 on socket 0 00:04:57.626 EAL: Detected lcore 23 as core 23 on socket 0 00:04:57.626 EAL: Detected lcore 24 as core 24 on socket 0 00:04:57.626 EAL: Detected lcore 25 as core 25 on socket 0 00:04:57.626 EAL: Detected lcore 26 as core 26 on socket 0 00:04:57.626 EAL: Detected lcore 27 as core 27 on socket 0 00:04:57.626 EAL: Detected lcore 28 as core 28 on socket 0 00:04:57.626 EAL: Detected lcore 29 as core 29 on socket 0 00:04:57.626 EAL: Detected lcore 30 as core 30 on socket 0 00:04:57.626 EAL: Detected lcore 31 as core 31 on socket 0 00:04:57.626 EAL: Detected lcore 32 as core 32 on socket 0 00:04:57.626 EAL: Detected lcore 33 as core 33 on socket 0 00:04:57.626 EAL: Detected lcore 34 as core 34 on socket 0 00:04:57.626 EAL: Detected lcore 35 as core 35 on socket 0 00:04:57.626 EAL: Detected lcore 36 as core 0 on socket 1 00:04:57.626 EAL: Detected lcore 37 as core 1 on socket 1 00:04:57.626 EAL: Detected lcore 38 as core 2 on socket 1 00:04:57.626 EAL: Detected lcore 39 as core 3 on socket 1 00:04:57.626 EAL: Detected lcore 40 as core 4 on socket 1 00:04:57.626 EAL: Detected lcore 41 as core 5 on socket 1 00:04:57.626 EAL: Detected lcore 42 as core 6 on socket 1 00:04:57.626 EAL: Detected lcore 43 as core 7 on socket 1 00:04:57.626 EAL: Detected lcore 44 as core 8 on socket 1 00:04:57.626 EAL: Detected lcore 45 as core 9 on socket 1 00:04:57.626 EAL: Detected lcore 46 as core 10 on socket 1 00:04:57.626 EAL: Detected lcore 47 as core 11 on socket 1 00:04:57.626 EAL: Detected lcore 48 as core 12 on socket 1 00:04:57.626 EAL: Detected lcore 49 as core 13 on socket 1 00:04:57.626 EAL: Detected lcore 50 as core 14 on socket 1 00:04:57.626 EAL: Detected lcore 51 as core 15 on socket 1 00:04:57.626 
EAL: Detected lcore 52 as core 16 on socket 1 00:04:57.626 EAL: Detected lcore 53 as core 17 on socket 1 00:04:57.626 EAL: Detected lcore 54 as core 18 on socket 1 00:04:57.626 EAL: Detected lcore 55 as core 19 on socket 1 00:04:57.626 EAL: Detected lcore 56 as core 20 on socket 1 00:04:57.626 EAL: Detected lcore 57 as core 21 on socket 1 00:04:57.626 EAL: Detected lcore 58 as core 22 on socket 1 00:04:57.626 EAL: Detected lcore 59 as core 23 on socket 1 00:04:57.626 EAL: Detected lcore 60 as core 24 on socket 1 00:04:57.626 EAL: Detected lcore 61 as core 25 on socket 1 00:04:57.626 EAL: Detected lcore 62 as core 26 on socket 1 00:04:57.626 EAL: Detected lcore 63 as core 27 on socket 1 00:04:57.626 EAL: Detected lcore 64 as core 28 on socket 1 00:04:57.626 EAL: Detected lcore 65 as core 29 on socket 1 00:04:57.626 EAL: Detected lcore 66 as core 30 on socket 1 00:04:57.626 EAL: Detected lcore 67 as core 31 on socket 1 00:04:57.626 EAL: Detected lcore 68 as core 32 on socket 1 00:04:57.626 EAL: Detected lcore 69 as core 33 on socket 1 00:04:57.626 EAL: Detected lcore 70 as core 34 on socket 1 00:04:57.626 EAL: Detected lcore 71 as core 35 on socket 1 00:04:57.626 EAL: Detected lcore 72 as core 0 on socket 0 00:04:57.626 EAL: Detected lcore 73 as core 1 on socket 0 00:04:57.626 EAL: Detected lcore 74 as core 2 on socket 0 00:04:57.626 EAL: Detected lcore 75 as core 3 on socket 0 00:04:57.626 EAL: Detected lcore 76 as core 4 on socket 0 00:04:57.626 EAL: Detected lcore 77 as core 5 on socket 0 00:04:57.626 EAL: Detected lcore 78 as core 6 on socket 0 00:04:57.626 EAL: Detected lcore 79 as core 7 on socket 0 00:04:57.626 EAL: Detected lcore 80 as core 8 on socket 0 00:04:57.626 EAL: Detected lcore 81 as core 9 on socket 0 00:04:57.626 EAL: Detected lcore 82 as core 10 on socket 0 00:04:57.626 EAL: Detected lcore 83 as core 11 on socket 0 00:04:57.626 EAL: Detected lcore 84 as core 12 on socket 0 00:04:57.626 EAL: Detected lcore 85 as core 13 on socket 0 00:04:57.626 EAL: Detected lcore 86 as core 14 on socket 0 00:04:57.626 EAL: Detected lcore 87 as core 15 on socket 0 00:04:57.626 EAL: Detected lcore 88 as core 16 on socket 0 00:04:57.626 EAL: Detected lcore 89 as core 17 on socket 0 00:04:57.626 EAL: Detected lcore 90 as core 18 on socket 0 00:04:57.626 EAL: Detected lcore 91 as core 19 on socket 0 00:04:57.626 EAL: Detected lcore 92 as core 20 on socket 0 00:04:57.626 EAL: Detected lcore 93 as core 21 on socket 0 00:04:57.626 EAL: Detected lcore 94 as core 22 on socket 0 00:04:57.626 EAL: Detected lcore 95 as core 23 on socket 0 00:04:57.626 EAL: Detected lcore 96 as core 24 on socket 0 00:04:57.626 EAL: Detected lcore 97 as core 25 on socket 0 00:04:57.626 EAL: Detected lcore 98 as core 26 on socket 0 00:04:57.626 EAL: Detected lcore 99 as core 27 on socket 0 00:04:57.626 EAL: Detected lcore 100 as core 28 on socket 0 00:04:57.626 EAL: Detected lcore 101 as core 29 on socket 0 00:04:57.626 EAL: Detected lcore 102 as core 30 on socket 0 00:04:57.626 EAL: Detected lcore 103 as core 31 on socket 0 00:04:57.626 EAL: Detected lcore 104 as core 32 on socket 0 00:04:57.626 EAL: Detected lcore 105 as core 33 on socket 0 00:04:57.626 EAL: Detected lcore 106 as core 34 on socket 0 00:04:57.626 EAL: Detected lcore 107 as core 35 on socket 0 00:04:57.626 EAL: Detected lcore 108 as core 0 on socket 1 00:04:57.626 EAL: Detected lcore 109 as core 1 on socket 1 00:04:57.626 EAL: Detected lcore 110 as core 2 on socket 1 00:04:57.626 EAL: Detected lcore 111 as core 3 on socket 1 00:04:57.626 EAL: Detected 
lcore 112 as core 4 on socket 1 00:04:57.626 EAL: Detected lcore 113 as core 5 on socket 1 00:04:57.626 EAL: Detected lcore 114 as core 6 on socket 1 00:04:57.626 EAL: Detected lcore 115 as core 7 on socket 1 00:04:57.626 EAL: Detected lcore 116 as core 8 on socket 1 00:04:57.626 EAL: Detected lcore 117 as core 9 on socket 1 00:04:57.626 EAL: Detected lcore 118 as core 10 on socket 1 00:04:57.626 EAL: Detected lcore 119 as core 11 on socket 1 00:04:57.626 EAL: Detected lcore 120 as core 12 on socket 1 00:04:57.626 EAL: Detected lcore 121 as core 13 on socket 1 00:04:57.626 EAL: Detected lcore 122 as core 14 on socket 1 00:04:57.626 EAL: Detected lcore 123 as core 15 on socket 1 00:04:57.626 EAL: Detected lcore 124 as core 16 on socket 1 00:04:57.626 EAL: Detected lcore 125 as core 17 on socket 1 00:04:57.626 EAL: Detected lcore 126 as core 18 on socket 1 00:04:57.626 EAL: Detected lcore 127 as core 19 on socket 1 00:04:57.626 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:57.626 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:57.626 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:57.626 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:57.626 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:57.626 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:57.626 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:57.626 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:57.626 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:57.626 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:57.626 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:57.626 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:57.626 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:57.626 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:57.626 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:57.627 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:57.627 EAL: Maximum logical cores by configuration: 128 00:04:57.627 EAL: Detected CPU lcores: 128 00:04:57.627 EAL: Detected NUMA nodes: 2 00:04:57.627 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:57.627 EAL: Detected shared linkage of DPDK 00:04:57.627 EAL: No shared files mode enabled, IPC will be disabled 00:04:57.627 EAL: Bus pci wants IOVA as 'DC' 00:04:57.627 EAL: Buses did not request a specific IOVA mode. 00:04:57.627 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:57.627 EAL: Selected IOVA mode 'VA' 00:04:57.627 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.627 EAL: Probing VFIO support... 00:04:57.627 EAL: IOMMU type 1 (Type 1) is supported 00:04:57.627 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:57.627 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:57.627 EAL: VFIO support initialized 00:04:57.627 EAL: Ask a virtual area of 0x2e000 bytes 00:04:57.627 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:57.627 EAL: Setting up physically contiguous memory... 
00:04:57.627 EAL: Setting maximum number of open files to 524288 00:04:57.627 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:57.627 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:57.627 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:57.627 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.627 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:57.627 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.627 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.627 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:57.627 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:57.627 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.627 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:57.627 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.627 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.627 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:57.627 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:57.627 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.627 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:57.627 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.627 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.627 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:57.627 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:57.627 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.627 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:57.627 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.627 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.627 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:57.627 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:57.627 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:57.627 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.627 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:57.627 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:57.627 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.627 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:57.627 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:57.627 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.627 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:57.627 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:57.627 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.627 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:57.627 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:57.627 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.627 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:57.627 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:57.627 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.627 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:57.627 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:57.627 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.627 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:57.627 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:57.627 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.627 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:57.627 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:57.627 EAL: Hugepages will be freed exactly as allocated. 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: TSC frequency is ~2400000 KHz 00:04:57.627 EAL: Main lcore 0 is ready (tid=7ff415869a00;cpuset=[0]) 00:04:57.627 EAL: Trying to obtain current memory policy. 00:04:57.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.627 EAL: Restoring previous memory policy: 0 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was expanded by 2MB 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:57.627 EAL: Mem event callback 'spdk:(nil)' registered 00:04:57.627 00:04:57.627 00:04:57.627 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.627 http://cunit.sourceforge.net/ 00:04:57.627 00:04:57.627 00:04:57.627 Suite: components_suite 00:04:57.627 Test: vtophys_malloc_test ...passed 00:04:57.627 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:57.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.627 EAL: Restoring previous memory policy: 4 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was expanded by 4MB 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was shrunk by 4MB 00:04:57.627 EAL: Trying to obtain current memory policy. 00:04:57.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.627 EAL: Restoring previous memory policy: 4 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was expanded by 6MB 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was shrunk by 6MB 00:04:57.627 EAL: Trying to obtain current memory policy. 00:04:57.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.627 EAL: Restoring previous memory policy: 4 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was expanded by 10MB 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was shrunk by 10MB 00:04:57.627 EAL: Trying to obtain current memory policy. 
00:04:57.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.627 EAL: Restoring previous memory policy: 4 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was expanded by 18MB 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was shrunk by 18MB 00:04:57.627 EAL: Trying to obtain current memory policy. 00:04:57.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.627 EAL: Restoring previous memory policy: 4 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was expanded by 34MB 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was shrunk by 34MB 00:04:57.627 EAL: Trying to obtain current memory policy. 00:04:57.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.627 EAL: Restoring previous memory policy: 4 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.627 EAL: request: mp_malloc_sync 00:04:57.627 EAL: No shared files mode enabled, IPC is disabled 00:04:57.627 EAL: Heap on socket 0 was expanded by 66MB 00:04:57.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.888 EAL: request: mp_malloc_sync 00:04:57.888 EAL: No shared files mode enabled, IPC is disabled 00:04:57.888 EAL: Heap on socket 0 was shrunk by 66MB 00:04:57.888 EAL: Trying to obtain current memory policy. 00:04:57.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.888 EAL: Restoring previous memory policy: 4 00:04:57.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.888 EAL: request: mp_malloc_sync 00:04:57.888 EAL: No shared files mode enabled, IPC is disabled 00:04:57.888 EAL: Heap on socket 0 was expanded by 130MB 00:04:57.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.888 EAL: request: mp_malloc_sync 00:04:57.888 EAL: No shared files mode enabled, IPC is disabled 00:04:57.888 EAL: Heap on socket 0 was shrunk by 130MB 00:04:57.888 EAL: Trying to obtain current memory policy. 00:04:57.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.888 EAL: Restoring previous memory policy: 4 00:04:57.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.888 EAL: request: mp_malloc_sync 00:04:57.888 EAL: No shared files mode enabled, IPC is disabled 00:04:57.888 EAL: Heap on socket 0 was expanded by 258MB 00:04:57.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.888 EAL: request: mp_malloc_sync 00:04:57.888 EAL: No shared files mode enabled, IPC is disabled 00:04:57.888 EAL: Heap on socket 0 was shrunk by 258MB 00:04:57.888 EAL: Trying to obtain current memory policy. 
00:04:57.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.888 EAL: Restoring previous memory policy: 4 00:04:57.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.888 EAL: request: mp_malloc_sync 00:04:57.888 EAL: No shared files mode enabled, IPC is disabled 00:04:57.888 EAL: Heap on socket 0 was expanded by 514MB 00:04:57.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.148 EAL: request: mp_malloc_sync 00:04:58.148 EAL: No shared files mode enabled, IPC is disabled 00:04:58.148 EAL: Heap on socket 0 was shrunk by 514MB 00:04:58.148 EAL: Trying to obtain current memory policy. 00:04:58.148 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.148 EAL: Restoring previous memory policy: 4 00:04:58.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.148 EAL: request: mp_malloc_sync 00:04:58.148 EAL: No shared files mode enabled, IPC is disabled 00:04:58.148 EAL: Heap on socket 0 was expanded by 1026MB 00:04:58.409 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.409 EAL: request: mp_malloc_sync 00:04:58.409 EAL: No shared files mode enabled, IPC is disabled 00:04:58.409 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:58.409 passed 00:04:58.409 00:04:58.409 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.409 suites 1 1 n/a 0 0 00:04:58.409 tests 2 2 2 0 0 00:04:58.409 asserts 497 497 497 0 n/a 00:04:58.409 00:04:58.409 Elapsed time = 0.657 seconds 00:04:58.409 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.409 EAL: request: mp_malloc_sync 00:04:58.409 EAL: No shared files mode enabled, IPC is disabled 00:04:58.409 EAL: Heap on socket 0 was shrunk by 2MB 00:04:58.409 EAL: No shared files mode enabled, IPC is disabled 00:04:58.409 EAL: No shared files mode enabled, IPC is disabled 00:04:58.409 EAL: No shared files mode enabled, IPC is disabled 00:04:58.409 00:04:58.409 real 0m0.789s 00:04:58.409 user 0m0.410s 00:04:58.409 sys 0m0.347s 00:04:58.409 20:55:25 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.409 20:55:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:58.409 ************************************ 00:04:58.409 END TEST env_vtophys 00:04:58.409 ************************************ 00:04:58.409 20:55:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.409 20:55:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:58.409 20:55:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.409 20:55:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.409 20:55:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.409 ************************************ 00:04:58.409 START TEST env_pci 00:04:58.409 ************************************ 00:04:58.409 20:55:25 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:58.409 00:04:58.409 00:04:58.409 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.409 http://cunit.sourceforge.net/ 00:04:58.409 00:04:58.409 00:04:58.409 Suite: pci 00:04:58.409 Test: pci_hook ...[2024-07-15 20:55:25.653821] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1730008 has claimed it 00:04:58.409 EAL: Cannot find device (10000:00:01.0) 00:04:58.409 EAL: Failed to attach device on primary process 00:04:58.409 passed 00:04:58.409 
00:04:58.409 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.409 suites 1 1 n/a 0 0 00:04:58.409 tests 1 1 1 0 0 00:04:58.409 asserts 25 25 25 0 n/a 00:04:58.409 00:04:58.409 Elapsed time = 0.032 seconds 00:04:58.409 00:04:58.409 real 0m0.051s 00:04:58.409 user 0m0.016s 00:04:58.409 sys 0m0.035s 00:04:58.409 20:55:25 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.409 20:55:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:58.409 ************************************ 00:04:58.409 END TEST env_pci 00:04:58.409 ************************************ 00:04:58.669 20:55:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.670 20:55:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:58.670 20:55:25 env -- env/env.sh@15 -- # uname 00:04:58.670 20:55:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:58.670 20:55:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:58.670 20:55:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.670 20:55:25 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:58.670 20:55:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.670 20:55:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.670 ************************************ 00:04:58.670 START TEST env_dpdk_post_init 00:04:58.670 ************************************ 00:04:58.670 20:55:25 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.670 EAL: Detected CPU lcores: 128 00:04:58.670 EAL: Detected NUMA nodes: 2 00:04:58.670 EAL: Detected shared linkage of DPDK 00:04:58.670 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.670 EAL: Selected IOVA mode 'VA' 00:04:58.670 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.670 EAL: VFIO support initialized 00:04:58.670 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:58.670 EAL: Using IOMMU type 1 (Type 1) 00:04:58.930 EAL: Ignore mapping IO port bar(1) 00:04:58.930 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:59.191 EAL: Ignore mapping IO port bar(1) 00:04:59.191 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:59.191 EAL: Ignore mapping IO port bar(1) 00:04:59.452 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:59.452 EAL: Ignore mapping IO port bar(1) 00:04:59.712 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:59.712 EAL: Ignore mapping IO port bar(1) 00:04:59.973 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:59.973 EAL: Ignore mapping IO port bar(1) 00:04:59.973 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:00.234 EAL: Ignore mapping IO port bar(1) 00:05:00.234 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:00.495 EAL: Ignore mapping IO port bar(1) 00:05:00.495 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:00.756 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:00.756 EAL: Ignore mapping IO port bar(1) 00:05:01.017 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:01.017 EAL: Ignore mapping IO port bar(1) 00:05:01.278 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:01.278 EAL: Ignore mapping IO port bar(1) 00:05:01.538 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:01.538 EAL: Ignore mapping IO port bar(1) 00:05:01.538 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:01.800 EAL: Ignore mapping IO port bar(1) 00:05:01.800 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:02.060 EAL: Ignore mapping IO port bar(1) 00:05:02.060 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:02.320 EAL: Ignore mapping IO port bar(1) 00:05:02.320 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:02.320 EAL: Ignore mapping IO port bar(1) 00:05:02.580 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:02.580 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:02.580 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:02.580 Starting DPDK initialization... 00:05:02.580 Starting SPDK post initialization... 00:05:02.580 SPDK NVMe probe 00:05:02.580 Attaching to 0000:65:00.0 00:05:02.580 Attached to 0000:65:00.0 00:05:02.580 Cleaning up... 00:05:04.526 00:05:04.526 real 0m5.733s 00:05:04.526 user 0m0.193s 00:05:04.526 sys 0m0.085s 00:05:04.526 20:55:31 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.526 20:55:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.526 ************************************ 00:05:04.526 END TEST env_dpdk_post_init 00:05:04.526 ************************************ 00:05:04.526 20:55:31 env -- common/autotest_common.sh@1142 -- # return 0 00:05:04.526 20:55:31 env -- env/env.sh@26 -- # uname 00:05:04.526 20:55:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:04.526 20:55:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:04.526 20:55:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.526 20:55:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.526 20:55:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.526 ************************************ 00:05:04.526 START TEST env_mem_callbacks 00:05:04.526 ************************************ 00:05:04.526 20:55:31 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:04.526 EAL: Detected CPU lcores: 128 00:05:04.526 EAL: Detected NUMA nodes: 2 00:05:04.526 EAL: Detected shared linkage of DPDK 00:05:04.526 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.526 EAL: Selected IOVA mode 'VA' 00:05:04.526 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.526 EAL: VFIO support initialized 00:05:04.526 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.526 00:05:04.526 00:05:04.526 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.526 http://cunit.sourceforge.net/ 00:05:04.526 00:05:04.526 00:05:04.526 Suite: memory 00:05:04.526 Test: test ... 
00:05:04.526 register 0x200000200000 2097152 00:05:04.526 malloc 3145728 00:05:04.526 register 0x200000400000 4194304 00:05:04.526 buf 0x200000500000 len 3145728 PASSED 00:05:04.526 malloc 64 00:05:04.526 buf 0x2000004fff40 len 64 PASSED 00:05:04.526 malloc 4194304 00:05:04.526 register 0x200000800000 6291456 00:05:04.526 buf 0x200000a00000 len 4194304 PASSED 00:05:04.526 free 0x200000500000 3145728 00:05:04.526 free 0x2000004fff40 64 00:05:04.526 unregister 0x200000400000 4194304 PASSED 00:05:04.526 free 0x200000a00000 4194304 00:05:04.526 unregister 0x200000800000 6291456 PASSED 00:05:04.526 malloc 8388608 00:05:04.526 register 0x200000400000 10485760 00:05:04.526 buf 0x200000600000 len 8388608 PASSED 00:05:04.526 free 0x200000600000 8388608 00:05:04.526 unregister 0x200000400000 10485760 PASSED 00:05:04.526 passed 00:05:04.526 00:05:04.526 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.526 suites 1 1 n/a 0 0 00:05:04.526 tests 1 1 1 0 0 00:05:04.526 asserts 15 15 15 0 n/a 00:05:04.526 00:05:04.526 Elapsed time = 0.008 seconds 00:05:04.526 00:05:04.526 real 0m0.067s 00:05:04.526 user 0m0.022s 00:05:04.526 sys 0m0.045s 00:05:04.526 20:55:31 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.526 20:55:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:04.526 ************************************ 00:05:04.526 END TEST env_mem_callbacks 00:05:04.526 ************************************ 00:05:04.526 20:55:31 env -- common/autotest_common.sh@1142 -- # return 0 00:05:04.526 00:05:04.526 real 0m7.338s 00:05:04.526 user 0m1.012s 00:05:04.526 sys 0m0.864s 00:05:04.526 20:55:31 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.526 20:55:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.526 ************************************ 00:05:04.526 END TEST env 00:05:04.526 ************************************ 00:05:04.526 20:55:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.526 20:55:31 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:04.526 20:55:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.526 20:55:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.526 20:55:31 -- common/autotest_common.sh@10 -- # set +x 00:05:04.526 ************************************ 00:05:04.526 START TEST rpc 00:05:04.526 ************************************ 00:05:04.526 20:55:31 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:04.787 * Looking for test storage... 00:05:04.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:04.787 20:55:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1731461 00:05:04.787 20:55:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.787 20:55:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1731461 00:05:04.787 20:55:31 rpc -- common/autotest_common.sh@829 -- # '[' -z 1731461 ']' 00:05:04.787 20:55:31 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.787 20:55:31 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.787 20:55:31 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
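The RPC test startup traced here follows a simple pattern: launch spdk_tgt in the background, poll the UNIX-domain RPC socket until the target answers, then issue RPCs against it. A minimal standalone sketch of that pattern, reusing the workspace paths and socket from this run; the rpc_get_methods and bdev_get_bdevs calls are just examples of RPCs the target serves, not the exact polling logic of the test helpers:

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk.sock

# Start the target with the bdev tracepoint group, as the trace above does.
"$rootdir/build/bin/spdk_tgt" -e bdev &
tgt_pid=$!

# Poll until the target is listening on the RPC socket (up to ~10 s).
for _ in $(seq 1 100); do
    if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done

# Any RPC works once the socket answers; list the registered bdevs here.
"$rootdir/scripts/rpc.py" -s "$sock" bdev_get_bdevs

kill "$tgt_pid"
wait "$tgt_pid" 2>/dev/null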
00:05:04.787 20:55:31 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.787 20:55:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.787 20:55:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:04.787 [2024-07-15 20:55:31.922342] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:04.787 [2024-07-15 20:55:31.922395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731461 ] 00:05:04.787 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.787 [2024-07-15 20:55:31.989129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.787 [2024-07-15 20:55:32.056925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:04.787 [2024-07-15 20:55:32.056961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1731461' to capture a snapshot of events at runtime. 00:05:04.787 [2024-07-15 20:55:32.056968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:04.787 [2024-07-15 20:55:32.056974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:04.787 [2024-07-15 20:55:32.056980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1731461 for offline analysis/debug. 00:05:04.787 [2024-07-15 20:55:32.056998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.729 20:55:32 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.729 20:55:32 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:05.729 20:55:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.729 20:55:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.729 20:55:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:05.729 20:55:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:05.729 20:55:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.729 20:55:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.729 20:55:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.729 ************************************ 00:05:05.729 START TEST rpc_integrity 00:05:05.729 ************************************ 00:05:05.729 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:05.729 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:05.729 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.729 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.729 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.730 20:55:32 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:05.730 { 00:05:05.730 "name": "Malloc0", 00:05:05.730 "aliases": [ 00:05:05.730 "eebb2cb1-ab65-4ac5-a707-fde22184a2e2" 00:05:05.730 ], 00:05:05.730 "product_name": "Malloc disk", 00:05:05.730 "block_size": 512, 00:05:05.730 "num_blocks": 16384, 00:05:05.730 "uuid": "eebb2cb1-ab65-4ac5-a707-fde22184a2e2", 00:05:05.730 "assigned_rate_limits": { 00:05:05.730 "rw_ios_per_sec": 0, 00:05:05.730 "rw_mbytes_per_sec": 0, 00:05:05.730 "r_mbytes_per_sec": 0, 00:05:05.730 "w_mbytes_per_sec": 0 00:05:05.730 }, 00:05:05.730 "claimed": false, 00:05:05.730 "zoned": false, 00:05:05.730 "supported_io_types": { 00:05:05.730 "read": true, 00:05:05.730 "write": true, 00:05:05.730 "unmap": true, 00:05:05.730 "flush": true, 00:05:05.730 "reset": true, 00:05:05.730 "nvme_admin": false, 00:05:05.730 "nvme_io": false, 00:05:05.730 "nvme_io_md": false, 00:05:05.730 "write_zeroes": true, 00:05:05.730 "zcopy": true, 00:05:05.730 "get_zone_info": false, 00:05:05.730 "zone_management": false, 00:05:05.730 "zone_append": false, 00:05:05.730 "compare": false, 00:05:05.730 "compare_and_write": false, 00:05:05.730 "abort": true, 00:05:05.730 "seek_hole": false, 00:05:05.730 "seek_data": false, 00:05:05.730 "copy": true, 00:05:05.730 "nvme_iov_md": false 00:05:05.730 }, 00:05:05.730 "memory_domains": [ 00:05:05.730 { 00:05:05.730 "dma_device_id": "system", 00:05:05.730 "dma_device_type": 1 00:05:05.730 }, 00:05:05.730 { 00:05:05.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.730 "dma_device_type": 2 00:05:05.730 } 00:05:05.730 ], 00:05:05.730 "driver_specific": {} 00:05:05.730 } 00:05:05.730 ]' 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.730 [2024-07-15 20:55:32.827351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:05.730 [2024-07-15 20:55:32.827381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:05.730 [2024-07-15 20:55:32.827393] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1997c50 00:05:05.730 [2024-07-15 20:55:32.827400] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:05.730 [2024-07-15 20:55:32.828705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:05.730 [2024-07-15 20:55:32.828726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:05.730 Passthru0 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:05.730 { 00:05:05.730 "name": "Malloc0", 00:05:05.730 "aliases": [ 00:05:05.730 "eebb2cb1-ab65-4ac5-a707-fde22184a2e2" 00:05:05.730 ], 00:05:05.730 "product_name": "Malloc disk", 00:05:05.730 "block_size": 512, 00:05:05.730 "num_blocks": 16384, 00:05:05.730 "uuid": "eebb2cb1-ab65-4ac5-a707-fde22184a2e2", 00:05:05.730 "assigned_rate_limits": { 00:05:05.730 "rw_ios_per_sec": 0, 00:05:05.730 "rw_mbytes_per_sec": 0, 00:05:05.730 "r_mbytes_per_sec": 0, 00:05:05.730 "w_mbytes_per_sec": 0 00:05:05.730 }, 00:05:05.730 "claimed": true, 00:05:05.730 "claim_type": "exclusive_write", 00:05:05.730 "zoned": false, 00:05:05.730 "supported_io_types": { 00:05:05.730 "read": true, 00:05:05.730 "write": true, 00:05:05.730 "unmap": true, 00:05:05.730 "flush": true, 00:05:05.730 "reset": true, 00:05:05.730 "nvme_admin": false, 00:05:05.730 "nvme_io": false, 00:05:05.730 "nvme_io_md": false, 00:05:05.730 "write_zeroes": true, 00:05:05.730 "zcopy": true, 00:05:05.730 "get_zone_info": false, 00:05:05.730 "zone_management": false, 00:05:05.730 "zone_append": false, 00:05:05.730 "compare": false, 00:05:05.730 "compare_and_write": false, 00:05:05.730 "abort": true, 00:05:05.730 "seek_hole": false, 00:05:05.730 "seek_data": false, 00:05:05.730 "copy": true, 00:05:05.730 "nvme_iov_md": false 00:05:05.730 }, 00:05:05.730 "memory_domains": [ 00:05:05.730 { 00:05:05.730 "dma_device_id": "system", 00:05:05.730 "dma_device_type": 1 00:05:05.730 }, 00:05:05.730 { 00:05:05.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.730 "dma_device_type": 2 00:05:05.730 } 00:05:05.730 ], 00:05:05.730 "driver_specific": {} 00:05:05.730 }, 00:05:05.730 { 00:05:05.730 "name": "Passthru0", 00:05:05.730 "aliases": [ 00:05:05.730 "56c246e7-6e60-51a4-9331-2689e581c6ae" 00:05:05.730 ], 00:05:05.730 "product_name": "passthru", 00:05:05.730 "block_size": 512, 00:05:05.730 "num_blocks": 16384, 00:05:05.730 "uuid": "56c246e7-6e60-51a4-9331-2689e581c6ae", 00:05:05.730 "assigned_rate_limits": { 00:05:05.730 "rw_ios_per_sec": 0, 00:05:05.730 "rw_mbytes_per_sec": 0, 00:05:05.730 "r_mbytes_per_sec": 0, 00:05:05.730 "w_mbytes_per_sec": 0 00:05:05.730 }, 00:05:05.730 "claimed": false, 00:05:05.730 "zoned": false, 00:05:05.730 "supported_io_types": { 00:05:05.730 "read": true, 00:05:05.730 "write": true, 00:05:05.730 "unmap": true, 00:05:05.730 "flush": true, 00:05:05.730 "reset": true, 00:05:05.730 "nvme_admin": false, 00:05:05.730 "nvme_io": false, 00:05:05.730 "nvme_io_md": false, 00:05:05.730 "write_zeroes": true, 00:05:05.730 "zcopy": true, 00:05:05.730 "get_zone_info": false, 00:05:05.730 "zone_management": false, 00:05:05.730 "zone_append": false, 00:05:05.730 "compare": false, 00:05:05.730 "compare_and_write": false, 00:05:05.730 "abort": true, 00:05:05.730 
"seek_hole": false, 00:05:05.730 "seek_data": false, 00:05:05.730 "copy": true, 00:05:05.730 "nvme_iov_md": false 00:05:05.730 }, 00:05:05.730 "memory_domains": [ 00:05:05.730 { 00:05:05.730 "dma_device_id": "system", 00:05:05.730 "dma_device_type": 1 00:05:05.730 }, 00:05:05.730 { 00:05:05.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.730 "dma_device_type": 2 00:05:05.730 } 00:05:05.730 ], 00:05:05.730 "driver_specific": { 00:05:05.730 "passthru": { 00:05:05.730 "name": "Passthru0", 00:05:05.730 "base_bdev_name": "Malloc0" 00:05:05.730 } 00:05:05.730 } 00:05:05.730 } 00:05:05.730 ]' 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.730 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:05.730 20:55:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:05.730 00:05:05.730 real 0m0.273s 00:05:05.730 user 0m0.181s 00:05:05.730 sys 0m0.035s 00:05:05.731 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.731 20:55:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.731 ************************************ 00:05:05.731 END TEST rpc_integrity 00:05:05.731 ************************************ 00:05:05.731 20:55:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:05.731 20:55:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:05.731 20:55:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.731 20:55:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.731 20:55:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.991 ************************************ 00:05:05.991 START TEST rpc_plugins 00:05:05.991 ************************************ 00:05:05.991 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:05.991 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:05.991 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.991 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.991 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.991 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:05.991 20:55:33 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:05.991 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.991 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.991 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.991 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:05.991 { 00:05:05.991 "name": "Malloc1", 00:05:05.991 "aliases": [ 00:05:05.991 "93eeb4f8-84b3-43a7-b337-20ad99575bd3" 00:05:05.991 ], 00:05:05.991 "product_name": "Malloc disk", 00:05:05.991 "block_size": 4096, 00:05:05.991 "num_blocks": 256, 00:05:05.991 "uuid": "93eeb4f8-84b3-43a7-b337-20ad99575bd3", 00:05:05.991 "assigned_rate_limits": { 00:05:05.991 "rw_ios_per_sec": 0, 00:05:05.991 "rw_mbytes_per_sec": 0, 00:05:05.991 "r_mbytes_per_sec": 0, 00:05:05.991 "w_mbytes_per_sec": 0 00:05:05.991 }, 00:05:05.991 "claimed": false, 00:05:05.991 "zoned": false, 00:05:05.991 "supported_io_types": { 00:05:05.991 "read": true, 00:05:05.991 "write": true, 00:05:05.992 "unmap": true, 00:05:05.992 "flush": true, 00:05:05.992 "reset": true, 00:05:05.992 "nvme_admin": false, 00:05:05.992 "nvme_io": false, 00:05:05.992 "nvme_io_md": false, 00:05:05.992 "write_zeroes": true, 00:05:05.992 "zcopy": true, 00:05:05.992 "get_zone_info": false, 00:05:05.992 "zone_management": false, 00:05:05.992 "zone_append": false, 00:05:05.992 "compare": false, 00:05:05.992 "compare_and_write": false, 00:05:05.992 "abort": true, 00:05:05.992 "seek_hole": false, 00:05:05.992 "seek_data": false, 00:05:05.992 "copy": true, 00:05:05.992 "nvme_iov_md": false 00:05:05.992 }, 00:05:05.992 "memory_domains": [ 00:05:05.992 { 00:05:05.992 "dma_device_id": "system", 00:05:05.992 "dma_device_type": 1 00:05:05.992 }, 00:05:05.992 { 00:05:05.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.992 "dma_device_type": 2 00:05:05.992 } 00:05:05.992 ], 00:05:05.992 "driver_specific": {} 00:05:05.992 } 00:05:05.992 ]' 00:05:05.992 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:05.992 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:05.992 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:05.992 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.992 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.992 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.992 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:05.992 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.992 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.992 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.992 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:05.992 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:05.992 20:55:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:05.992 00:05:05.992 real 0m0.147s 00:05:05.992 user 0m0.093s 00:05:05.992 sys 0m0.020s 00:05:05.992 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.992 20:55:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.992 ************************************ 00:05:05.992 END TEST rpc_plugins 00:05:05.992 ************************************ 00:05:05.992 20:55:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:05.992 20:55:33 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:05.992 20:55:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.992 20:55:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.992 20:55:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.992 ************************************ 00:05:05.992 START TEST rpc_trace_cmd_test 00:05:05.992 ************************************ 00:05:05.992 20:55:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:05.992 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:05.992 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:05.992 20:55:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.992 20:55:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:05.992 20:55:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.992 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:05.992 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1731461", 00:05:05.992 "tpoint_group_mask": "0x8", 00:05:05.992 "iscsi_conn": { 00:05:05.992 "mask": "0x2", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "scsi": { 00:05:05.992 "mask": "0x4", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "bdev": { 00:05:05.992 "mask": "0x8", 00:05:05.992 "tpoint_mask": "0xffffffffffffffff" 00:05:05.992 }, 00:05:05.992 "nvmf_rdma": { 00:05:05.992 "mask": "0x10", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "nvmf_tcp": { 00:05:05.992 "mask": "0x20", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "ftl": { 00:05:05.992 "mask": "0x40", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "blobfs": { 00:05:05.992 "mask": "0x80", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "dsa": { 00:05:05.992 "mask": "0x200", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "thread": { 00:05:05.992 "mask": "0x400", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "nvme_pcie": { 00:05:05.992 "mask": "0x800", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "iaa": { 00:05:05.992 "mask": "0x1000", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "nvme_tcp": { 00:05:05.992 "mask": "0x2000", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "bdev_nvme": { 00:05:05.992 "mask": "0x4000", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 }, 00:05:05.992 "sock": { 00:05:05.992 "mask": "0x8000", 00:05:05.992 "tpoint_mask": "0x0" 00:05:05.992 } 00:05:05.992 }' 00:05:05.992 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:05:06.251 00:05:06.251 real 0m0.239s 00:05:06.251 user 0m0.208s 00:05:06.251 sys 0m0.024s 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.251 20:55:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:06.251 ************************************ 00:05:06.251 END TEST rpc_trace_cmd_test 00:05:06.251 ************************************ 00:05:06.251 20:55:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.251 20:55:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:06.251 20:55:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:06.251 20:55:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:06.252 20:55:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.252 20:55:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.252 20:55:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.511 ************************************ 00:05:06.511 START TEST rpc_daemon_integrity 00:05:06.511 ************************************ 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.511 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:06.511 { 00:05:06.511 "name": "Malloc2", 00:05:06.511 "aliases": [ 00:05:06.511 "e8e35747-28a5-4a23-8d08-eacec8db9acb" 00:05:06.511 ], 00:05:06.511 "product_name": "Malloc disk", 00:05:06.511 "block_size": 512, 00:05:06.511 "num_blocks": 16384, 00:05:06.511 "uuid": "e8e35747-28a5-4a23-8d08-eacec8db9acb", 00:05:06.511 "assigned_rate_limits": { 00:05:06.511 "rw_ios_per_sec": 0, 00:05:06.511 "rw_mbytes_per_sec": 0, 00:05:06.511 "r_mbytes_per_sec": 0, 00:05:06.511 "w_mbytes_per_sec": 0 00:05:06.511 }, 00:05:06.511 "claimed": false, 00:05:06.511 "zoned": false, 00:05:06.511 "supported_io_types": { 00:05:06.511 "read": true, 00:05:06.511 "write": true, 00:05:06.511 "unmap": true, 00:05:06.511 "flush": true, 00:05:06.511 "reset": true, 00:05:06.511 "nvme_admin": false, 
00:05:06.511 "nvme_io": false, 00:05:06.511 "nvme_io_md": false, 00:05:06.511 "write_zeroes": true, 00:05:06.511 "zcopy": true, 00:05:06.511 "get_zone_info": false, 00:05:06.511 "zone_management": false, 00:05:06.511 "zone_append": false, 00:05:06.511 "compare": false, 00:05:06.511 "compare_and_write": false, 00:05:06.511 "abort": true, 00:05:06.511 "seek_hole": false, 00:05:06.511 "seek_data": false, 00:05:06.511 "copy": true, 00:05:06.511 "nvme_iov_md": false 00:05:06.511 }, 00:05:06.511 "memory_domains": [ 00:05:06.511 { 00:05:06.511 "dma_device_id": "system", 00:05:06.511 "dma_device_type": 1 00:05:06.511 }, 00:05:06.511 { 00:05:06.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.512 "dma_device_type": 2 00:05:06.512 } 00:05:06.512 ], 00:05:06.512 "driver_specific": {} 00:05:06.512 } 00:05:06.512 ]' 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.512 [2024-07-15 20:55:33.705745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:06.512 [2024-07-15 20:55:33.705773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:06.512 [2024-07-15 20:55:33.705786] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x199a700 00:05:06.512 [2024-07-15 20:55:33.705793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:06.512 [2024-07-15 20:55:33.706989] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:06.512 [2024-07-15 20:55:33.707008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:06.512 Passthru0 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:06.512 { 00:05:06.512 "name": "Malloc2", 00:05:06.512 "aliases": [ 00:05:06.512 "e8e35747-28a5-4a23-8d08-eacec8db9acb" 00:05:06.512 ], 00:05:06.512 "product_name": "Malloc disk", 00:05:06.512 "block_size": 512, 00:05:06.512 "num_blocks": 16384, 00:05:06.512 "uuid": "e8e35747-28a5-4a23-8d08-eacec8db9acb", 00:05:06.512 "assigned_rate_limits": { 00:05:06.512 "rw_ios_per_sec": 0, 00:05:06.512 "rw_mbytes_per_sec": 0, 00:05:06.512 "r_mbytes_per_sec": 0, 00:05:06.512 "w_mbytes_per_sec": 0 00:05:06.512 }, 00:05:06.512 "claimed": true, 00:05:06.512 "claim_type": "exclusive_write", 00:05:06.512 "zoned": false, 00:05:06.512 "supported_io_types": { 00:05:06.512 "read": true, 00:05:06.512 "write": true, 00:05:06.512 "unmap": true, 00:05:06.512 "flush": true, 00:05:06.512 "reset": true, 00:05:06.512 "nvme_admin": false, 00:05:06.512 "nvme_io": false, 00:05:06.512 "nvme_io_md": false, 00:05:06.512 "write_zeroes": true, 00:05:06.512 "zcopy": true, 
00:05:06.512 "get_zone_info": false, 00:05:06.512 "zone_management": false, 00:05:06.512 "zone_append": false, 00:05:06.512 "compare": false, 00:05:06.512 "compare_and_write": false, 00:05:06.512 "abort": true, 00:05:06.512 "seek_hole": false, 00:05:06.512 "seek_data": false, 00:05:06.512 "copy": true, 00:05:06.512 "nvme_iov_md": false 00:05:06.512 }, 00:05:06.512 "memory_domains": [ 00:05:06.512 { 00:05:06.512 "dma_device_id": "system", 00:05:06.512 "dma_device_type": 1 00:05:06.512 }, 00:05:06.512 { 00:05:06.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.512 "dma_device_type": 2 00:05:06.512 } 00:05:06.512 ], 00:05:06.512 "driver_specific": {} 00:05:06.512 }, 00:05:06.512 { 00:05:06.512 "name": "Passthru0", 00:05:06.512 "aliases": [ 00:05:06.512 "69b96b28-a1fd-59fb-8bae-a84a72e74515" 00:05:06.512 ], 00:05:06.512 "product_name": "passthru", 00:05:06.512 "block_size": 512, 00:05:06.512 "num_blocks": 16384, 00:05:06.512 "uuid": "69b96b28-a1fd-59fb-8bae-a84a72e74515", 00:05:06.512 "assigned_rate_limits": { 00:05:06.512 "rw_ios_per_sec": 0, 00:05:06.512 "rw_mbytes_per_sec": 0, 00:05:06.512 "r_mbytes_per_sec": 0, 00:05:06.512 "w_mbytes_per_sec": 0 00:05:06.512 }, 00:05:06.512 "claimed": false, 00:05:06.512 "zoned": false, 00:05:06.512 "supported_io_types": { 00:05:06.512 "read": true, 00:05:06.512 "write": true, 00:05:06.512 "unmap": true, 00:05:06.512 "flush": true, 00:05:06.512 "reset": true, 00:05:06.512 "nvme_admin": false, 00:05:06.512 "nvme_io": false, 00:05:06.512 "nvme_io_md": false, 00:05:06.512 "write_zeroes": true, 00:05:06.512 "zcopy": true, 00:05:06.512 "get_zone_info": false, 00:05:06.512 "zone_management": false, 00:05:06.512 "zone_append": false, 00:05:06.512 "compare": false, 00:05:06.512 "compare_and_write": false, 00:05:06.512 "abort": true, 00:05:06.512 "seek_hole": false, 00:05:06.512 "seek_data": false, 00:05:06.512 "copy": true, 00:05:06.512 "nvme_iov_md": false 00:05:06.512 }, 00:05:06.512 "memory_domains": [ 00:05:06.512 { 00:05:06.512 "dma_device_id": "system", 00:05:06.512 "dma_device_type": 1 00:05:06.512 }, 00:05:06.512 { 00:05:06.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.512 "dma_device_type": 2 00:05:06.512 } 00:05:06.512 ], 00:05:06.512 "driver_specific": { 00:05:06.512 "passthru": { 00:05:06.512 "name": "Passthru0", 00:05:06.512 "base_bdev_name": "Malloc2" 00:05:06.512 } 00:05:06.512 } 00:05:06.512 } 00:05:06.512 ]' 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:06.512 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:06.772 20:55:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:06.772 00:05:06.772 real 0m0.257s 00:05:06.772 user 0m0.172s 00:05:06.772 sys 0m0.028s 00:05:06.772 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.772 20:55:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.772 ************************************ 00:05:06.772 END TEST rpc_daemon_integrity 00:05:06.772 ************************************ 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.772 20:55:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:06.772 20:55:33 rpc -- rpc/rpc.sh@84 -- # killprocess 1731461 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@948 -- # '[' -z 1731461 ']' 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@952 -- # kill -0 1731461 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@953 -- # uname 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1731461 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1731461' 00:05:06.772 killing process with pid 1731461 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@967 -- # kill 1731461 00:05:06.772 20:55:33 rpc -- common/autotest_common.sh@972 -- # wait 1731461 00:05:07.032 00:05:07.032 real 0m2.349s 00:05:07.032 user 0m3.094s 00:05:07.032 sys 0m0.638s 00:05:07.032 20:55:34 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.032 20:55:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.032 ************************************ 00:05:07.032 END TEST rpc 00:05:07.032 ************************************ 00:05:07.032 20:55:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.032 20:55:34 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:07.032 20:55:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.032 20:55:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.032 20:55:34 -- common/autotest_common.sh@10 -- # set +x 00:05:07.032 ************************************ 00:05:07.032 START TEST skip_rpc 00:05:07.032 ************************************ 00:05:07.033 20:55:34 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:07.033 * Looking for test storage... 
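For reference, the rpc_integrity pass above drives spdk_tgt purely through its JSON-RPC interface: create a malloc bdev, stack a passthru bdev on it, inspect both with bdev_get_bdevs, then delete them in reverse order. The same sequence can be replayed by hand with the rpc.py client from the SPDK tree; a minimal sketch, assuming an spdk_tgt instance is already listening on the default /var/tmp/spdk.sock and the commands are run from the repository root (the 8 MiB / 512-byte malloc geometry is taken from the trace above):

    # create an 8 MiB malloc bdev with a 512-byte block size (reported back as Malloc0)
    ./scripts/rpc.py bdev_malloc_create 8 512
    # layer a passthru bdev named Passthru0 on top of Malloc0, then list both
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs
    # tear down in reverse order
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0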
00:05:07.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:07.033 20:55:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.033 20:55:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:07.033 20:55:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:07.033 20:55:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.033 20:55:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.033 20:55:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.293 ************************************ 00:05:07.293 START TEST skip_rpc 00:05:07.293 ************************************ 00:05:07.293 20:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:07.293 20:55:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1731980 00:05:07.293 20:55:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.293 20:55:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:07.293 20:55:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:07.293 [2024-07-15 20:55:34.399666] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:07.293 [2024-07-15 20:55:34.399721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731980 ] 00:05:07.293 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.293 [2024-07-15 20:55:34.469736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.293 [2024-07-15 20:55:34.542751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1731980 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1731980 ']' 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1731980 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1731980 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1731980' 00:05:12.579 killing process with pid 1731980 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1731980 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1731980 00:05:12.579 00:05:12.579 real 0m5.278s 00:05:12.579 user 0m5.078s 00:05:12.579 sys 0m0.236s 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.579 20:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.579 ************************************ 00:05:12.579 END TEST skip_rpc 00:05:12.579 ************************************ 00:05:12.579 20:55:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.579 20:55:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:12.579 20:55:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.579 20:55:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.579 20:55:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.579 ************************************ 00:05:12.579 START TEST skip_rpc_with_json 00:05:12.579 ************************************ 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1733139 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1733139 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1733139 ']' 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.579 20:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.579 [2024-07-15 20:55:39.743148] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:12.579 [2024-07-15 20:55:39.743203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733139 ] 00:05:12.579 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.579 [2024-07-15 20:55:39.810476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.839 [2024-07-15 20:55:39.878453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.410 [2024-07-15 20:55:40.514441] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:13.410 request: 00:05:13.410 { 00:05:13.410 "trtype": "tcp", 00:05:13.410 "method": "nvmf_get_transports", 00:05:13.410 "req_id": 1 00:05:13.410 } 00:05:13.410 Got JSON-RPC error response 00:05:13.410 response: 00:05:13.410 { 00:05:13.410 "code": -19, 00:05:13.410 "message": "No such device" 00:05:13.410 } 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.410 [2024-07-15 20:55:40.526567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.410 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.410 { 00:05:13.410 "subsystems": [ 00:05:13.410 { 00:05:13.410 "subsystem": "vfio_user_target", 00:05:13.410 "config": null 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "subsystem": "keyring", 00:05:13.410 "config": [] 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "subsystem": "iobuf", 00:05:13.410 "config": [ 00:05:13.410 { 00:05:13.410 "method": "iobuf_set_options", 00:05:13.410 "params": { 00:05:13.410 "small_pool_count": 8192, 00:05:13.410 "large_pool_count": 1024, 00:05:13.410 "small_bufsize": 8192, 00:05:13.410 "large_bufsize": 
135168 00:05:13.410 } 00:05:13.410 } 00:05:13.410 ] 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "subsystem": "sock", 00:05:13.410 "config": [ 00:05:13.410 { 00:05:13.410 "method": "sock_set_default_impl", 00:05:13.410 "params": { 00:05:13.410 "impl_name": "posix" 00:05:13.410 } 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "method": "sock_impl_set_options", 00:05:13.410 "params": { 00:05:13.410 "impl_name": "ssl", 00:05:13.410 "recv_buf_size": 4096, 00:05:13.410 "send_buf_size": 4096, 00:05:13.410 "enable_recv_pipe": true, 00:05:13.410 "enable_quickack": false, 00:05:13.410 "enable_placement_id": 0, 00:05:13.410 "enable_zerocopy_send_server": true, 00:05:13.410 "enable_zerocopy_send_client": false, 00:05:13.410 "zerocopy_threshold": 0, 00:05:13.410 "tls_version": 0, 00:05:13.410 "enable_ktls": false 00:05:13.410 } 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "method": "sock_impl_set_options", 00:05:13.410 "params": { 00:05:13.410 "impl_name": "posix", 00:05:13.410 "recv_buf_size": 2097152, 00:05:13.410 "send_buf_size": 2097152, 00:05:13.410 "enable_recv_pipe": true, 00:05:13.410 "enable_quickack": false, 00:05:13.410 "enable_placement_id": 0, 00:05:13.410 "enable_zerocopy_send_server": true, 00:05:13.410 "enable_zerocopy_send_client": false, 00:05:13.410 "zerocopy_threshold": 0, 00:05:13.410 "tls_version": 0, 00:05:13.410 "enable_ktls": false 00:05:13.410 } 00:05:13.410 } 00:05:13.410 ] 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "subsystem": "vmd", 00:05:13.410 "config": [] 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "subsystem": "accel", 00:05:13.410 "config": [ 00:05:13.410 { 00:05:13.410 "method": "accel_set_options", 00:05:13.410 "params": { 00:05:13.410 "small_cache_size": 128, 00:05:13.410 "large_cache_size": 16, 00:05:13.410 "task_count": 2048, 00:05:13.410 "sequence_count": 2048, 00:05:13.410 "buf_count": 2048 00:05:13.410 } 00:05:13.410 } 00:05:13.410 ] 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "subsystem": "bdev", 00:05:13.410 "config": [ 00:05:13.410 { 00:05:13.410 "method": "bdev_set_options", 00:05:13.410 "params": { 00:05:13.410 "bdev_io_pool_size": 65535, 00:05:13.410 "bdev_io_cache_size": 256, 00:05:13.410 "bdev_auto_examine": true, 00:05:13.410 "iobuf_small_cache_size": 128, 00:05:13.410 "iobuf_large_cache_size": 16 00:05:13.410 } 00:05:13.410 }, 00:05:13.410 { 00:05:13.410 "method": "bdev_raid_set_options", 00:05:13.410 "params": { 00:05:13.410 "process_window_size_kb": 1024 00:05:13.410 } 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "method": "bdev_iscsi_set_options", 00:05:13.411 "params": { 00:05:13.411 "timeout_sec": 30 00:05:13.411 } 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "method": "bdev_nvme_set_options", 00:05:13.411 "params": { 00:05:13.411 "action_on_timeout": "none", 00:05:13.411 "timeout_us": 0, 00:05:13.411 "timeout_admin_us": 0, 00:05:13.411 "keep_alive_timeout_ms": 10000, 00:05:13.411 "arbitration_burst": 0, 00:05:13.411 "low_priority_weight": 0, 00:05:13.411 "medium_priority_weight": 0, 00:05:13.411 "high_priority_weight": 0, 00:05:13.411 "nvme_adminq_poll_period_us": 10000, 00:05:13.411 "nvme_ioq_poll_period_us": 0, 00:05:13.411 "io_queue_requests": 0, 00:05:13.411 "delay_cmd_submit": true, 00:05:13.411 "transport_retry_count": 4, 00:05:13.411 "bdev_retry_count": 3, 00:05:13.411 "transport_ack_timeout": 0, 00:05:13.411 "ctrlr_loss_timeout_sec": 0, 00:05:13.411 "reconnect_delay_sec": 0, 00:05:13.411 "fast_io_fail_timeout_sec": 0, 00:05:13.411 "disable_auto_failback": false, 00:05:13.411 "generate_uuids": false, 00:05:13.411 "transport_tos": 0, 
00:05:13.411 "nvme_error_stat": false, 00:05:13.411 "rdma_srq_size": 0, 00:05:13.411 "io_path_stat": false, 00:05:13.411 "allow_accel_sequence": false, 00:05:13.411 "rdma_max_cq_size": 0, 00:05:13.411 "rdma_cm_event_timeout_ms": 0, 00:05:13.411 "dhchap_digests": [ 00:05:13.411 "sha256", 00:05:13.411 "sha384", 00:05:13.411 "sha512" 00:05:13.411 ], 00:05:13.411 "dhchap_dhgroups": [ 00:05:13.411 "null", 00:05:13.411 "ffdhe2048", 00:05:13.411 "ffdhe3072", 00:05:13.411 "ffdhe4096", 00:05:13.411 "ffdhe6144", 00:05:13.411 "ffdhe8192" 00:05:13.411 ] 00:05:13.411 } 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "method": "bdev_nvme_set_hotplug", 00:05:13.411 "params": { 00:05:13.411 "period_us": 100000, 00:05:13.411 "enable": false 00:05:13.411 } 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "method": "bdev_wait_for_examine" 00:05:13.411 } 00:05:13.411 ] 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "subsystem": "scsi", 00:05:13.411 "config": null 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "subsystem": "scheduler", 00:05:13.411 "config": [ 00:05:13.411 { 00:05:13.411 "method": "framework_set_scheduler", 00:05:13.411 "params": { 00:05:13.411 "name": "static" 00:05:13.411 } 00:05:13.411 } 00:05:13.411 ] 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "subsystem": "vhost_scsi", 00:05:13.411 "config": [] 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "subsystem": "vhost_blk", 00:05:13.411 "config": [] 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "subsystem": "ublk", 00:05:13.411 "config": [] 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "subsystem": "nbd", 00:05:13.411 "config": [] 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "subsystem": "nvmf", 00:05:13.411 "config": [ 00:05:13.411 { 00:05:13.411 "method": "nvmf_set_config", 00:05:13.411 "params": { 00:05:13.411 "discovery_filter": "match_any", 00:05:13.411 "admin_cmd_passthru": { 00:05:13.411 "identify_ctrlr": false 00:05:13.411 } 00:05:13.411 } 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "method": "nvmf_set_max_subsystems", 00:05:13.411 "params": { 00:05:13.411 "max_subsystems": 1024 00:05:13.411 } 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "method": "nvmf_set_crdt", 00:05:13.411 "params": { 00:05:13.411 "crdt1": 0, 00:05:13.411 "crdt2": 0, 00:05:13.411 "crdt3": 0 00:05:13.411 } 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "method": "nvmf_create_transport", 00:05:13.411 "params": { 00:05:13.411 "trtype": "TCP", 00:05:13.411 "max_queue_depth": 128, 00:05:13.411 "max_io_qpairs_per_ctrlr": 127, 00:05:13.411 "in_capsule_data_size": 4096, 00:05:13.411 "max_io_size": 131072, 00:05:13.411 "io_unit_size": 131072, 00:05:13.411 "max_aq_depth": 128, 00:05:13.411 "num_shared_buffers": 511, 00:05:13.411 "buf_cache_size": 4294967295, 00:05:13.411 "dif_insert_or_strip": false, 00:05:13.411 "zcopy": false, 00:05:13.411 "c2h_success": true, 00:05:13.411 "sock_priority": 0, 00:05:13.411 "abort_timeout_sec": 1, 00:05:13.411 "ack_timeout": 0, 00:05:13.411 "data_wr_pool_size": 0 00:05:13.411 } 00:05:13.411 } 00:05:13.411 ] 00:05:13.411 }, 00:05:13.411 { 00:05:13.411 "subsystem": "iscsi", 00:05:13.411 "config": [ 00:05:13.411 { 00:05:13.411 "method": "iscsi_set_options", 00:05:13.411 "params": { 00:05:13.411 "node_base": "iqn.2016-06.io.spdk", 00:05:13.411 "max_sessions": 128, 00:05:13.411 "max_connections_per_session": 2, 00:05:13.411 "max_queue_depth": 64, 00:05:13.411 "default_time2wait": 2, 00:05:13.411 "default_time2retain": 20, 00:05:13.411 "first_burst_length": 8192, 00:05:13.411 "immediate_data": true, 00:05:13.411 "allow_duplicated_isid": false, 00:05:13.411 
"error_recovery_level": 0, 00:05:13.411 "nop_timeout": 60, 00:05:13.411 "nop_in_interval": 30, 00:05:13.411 "disable_chap": false, 00:05:13.411 "require_chap": false, 00:05:13.411 "mutual_chap": false, 00:05:13.411 "chap_group": 0, 00:05:13.411 "max_large_datain_per_connection": 64, 00:05:13.411 "max_r2t_per_connection": 4, 00:05:13.411 "pdu_pool_size": 36864, 00:05:13.411 "immediate_data_pool_size": 16384, 00:05:13.411 "data_out_pool_size": 2048 00:05:13.411 } 00:05:13.411 } 00:05:13.411 ] 00:05:13.411 } 00:05:13.411 ] 00:05:13.411 } 00:05:13.411 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:13.411 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1733139 00:05:13.411 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1733139 ']' 00:05:13.411 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1733139 00:05:13.411 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:13.411 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1733139 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1733139' 00:05:13.672 killing process with pid 1733139 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1733139 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1733139 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1733356 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:13.672 20:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.958 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1733356 00:05:18.958 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1733356 ']' 00:05:18.958 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1733356 00:05:18.958 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:18.958 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.958 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1733356 00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1733356' 00:05:18.958 killing process with pid 1733356 00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1733356 00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1733356 
00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:18.958 00:05:18.958 real 0m6.549s 00:05:18.958 user 0m6.422s 00:05:18.958 sys 0m0.536s 00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.958 20:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.958 ************************************ 00:05:18.958 END TEST skip_rpc_with_json 00:05:18.958 ************************************ 00:05:19.219 20:55:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:19.219 20:55:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:19.219 20:55:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.219 20:55:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.219 20:55:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.219 ************************************ 00:05:19.219 START TEST skip_rpc_with_delay 00:05:19.219 ************************************ 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.219 [2024-07-15 20:55:46.370183] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:19.219 [2024-07-15 20:55:46.370281] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.219 00:05:19.219 real 0m0.074s 00:05:19.219 user 0m0.045s 00:05:19.219 sys 0m0.029s 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.219 20:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:19.219 ************************************ 00:05:19.219 END TEST skip_rpc_with_delay 00:05:19.219 ************************************ 00:05:19.219 20:55:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:19.219 20:55:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:19.219 20:55:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:19.219 20:55:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:19.219 20:55:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.219 20:55:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.219 20:55:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.219 ************************************ 00:05:19.219 START TEST exit_on_failed_rpc_init 00:05:19.219 ************************************ 00:05:19.219 20:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:19.219 20:55:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1734588 00:05:19.219 20:55:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1734588 00:05:19.220 20:55:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.220 20:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1734588 ']' 00:05:19.220 20:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.220 20:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.220 20:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.220 20:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.220 20:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.480 [2024-07-15 20:55:46.520157] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:19.480 [2024-07-15 20:55:46.520221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734588 ] 00:05:19.480 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.480 [2024-07-15 20:55:46.590433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.480 [2024-07-15 20:55:46.665226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:20.051 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.311 [2024-07-15 20:55:47.346057] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
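The second spdk_tgt launched just above (core mask 0x2; its EAL output continues below) is expected to fail, because both instances default to the same RPC listen path, /var/tmp/spdk.sock. Outside of this negative test, two targets can coexist by giving each its own RPC socket with -r and addressing them with rpc.py -s; a sketch, with /var/tmp/spdk_second.sock as a made-up example path:

# first target owns the default RPC socket, /var/tmp/spdk.sock
$SPDK_DIR/build/bin/spdk_tgt -m 0x1 &

# a second target only coexists if it is given its own RPC socket
$SPDK_DIR/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &

sleep 2    # crude wait; the real tests use waitforlisten

# address each instance explicitly
$SPDK_DIR/scripts/rpc.py rpc_get_methods > /dev/null
$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_second.sock rpc_get_methods > /dev/null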
00:05:20.311 [2024-07-15 20:55:47.346110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734757 ] 00:05:20.311 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.311 [2024-07-15 20:55:47.426986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.311 [2024-07-15 20:55:47.491019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.311 [2024-07-15 20:55:47.491081] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:20.312 [2024-07-15 20:55:47.491091] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:20.312 [2024-07-15 20:55:47.491097] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1734588 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1734588 ']' 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1734588 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.312 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1734588 00:05:20.572 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.572 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.572 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1734588' 00:05:20.572 killing process with pid 1734588 00:05:20.572 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1734588 00:05:20.572 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1734588 00:05:20.572 00:05:20.572 real 0m1.348s 00:05:20.572 user 0m1.561s 00:05:20.572 sys 0m0.390s 00:05:20.572 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.572 20:55:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:20.572 ************************************ 00:05:20.572 END TEST exit_on_failed_rpc_init 00:05:20.572 ************************************ 00:05:20.572 20:55:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:20.572 20:55:47 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.572 00:05:20.572 real 0m13.655s 00:05:20.572 user 0m13.250s 00:05:20.572 sys 0m1.472s 00:05:20.572 20:55:47 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.572 20:55:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.572 ************************************ 00:05:20.572 END TEST skip_rpc 00:05:20.572 ************************************ 00:05:20.832 20:55:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.832 20:55:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:20.832 20:55:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.832 20:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.832 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:05:20.832 ************************************ 00:05:20.832 START TEST rpc_client 00:05:20.832 ************************************ 00:05:20.832 20:55:47 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:20.832 * Looking for test storage... 00:05:20.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:20.833 20:55:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:20.833 OK 00:05:20.833 20:55:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:20.833 00:05:20.833 real 0m0.130s 00:05:20.833 user 0m0.051s 00:05:20.833 sys 0m0.087s 00:05:20.833 20:55:48 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.833 20:55:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:20.833 ************************************ 00:05:20.833 END TEST rpc_client 00:05:20.833 ************************************ 00:05:20.833 20:55:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.833 20:55:48 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:20.833 20:55:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.833 20:55:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.833 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:05:21.094 ************************************ 00:05:21.094 START TEST json_config 00:05:21.094 ************************************ 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.094 
20:55:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.094 20:55:48 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.094 20:55:48 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.094 20:55:48 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.094 20:55:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.094 20:55:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.094 20:55:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.094 20:55:48 json_config -- paths/export.sh@5 -- # export PATH 00:05:21.094 20:55:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@47 -- # : 0 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.094 20:55:48 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:21.094 20:55:48 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:21.094 INFO: JSON configuration test init 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.094 20:55:48 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:21.094 20:55:48 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.094 20:55:48 json_config -- json_config/common.sh@10 -- # shift 00:05:21.094 20:55:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.094 20:55:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.094 20:55:48 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.094 20:55:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.094 20:55:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.094 20:55:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1735098 00:05:21.094 20:55:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.094 Waiting for target to run... 00:05:21.094 20:55:48 json_config -- json_config/common.sh@25 -- # waitforlisten 1735098 /var/tmp/spdk_tgt.sock 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@829 -- # '[' -z 1735098 ']' 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.094 20:55:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.094 20:55:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.094 [2024-07-15 20:55:48.318892] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:21.094 [2024-07-15 20:55:48.318969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735098 ] 00:05:21.094 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.355 [2024-07-15 20:55:48.632213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.615 [2024-07-15 20:55:48.689159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.875 20:55:49 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.875 20:55:49 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:21.875 20:55:49 json_config -- json_config/common.sh@26 -- # echo '' 00:05:21.875 00:05:21.875 20:55:49 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:21.875 20:55:49 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:21.875 20:55:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.875 20:55:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.875 20:55:49 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:21.875 20:55:49 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:21.875 20:55:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:21.875 20:55:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.875 20:55:49 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:21.875 20:55:49 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:21.875 20:55:49 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:22.445 20:55:49 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:22.445 20:55:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:22.445 20:55:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.445 20:55:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.445 20:55:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:22.445 20:55:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:22.445 20:55:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:22.445 20:55:49 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:22.445 20:55:49 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:22.445 20:55:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:22.706 20:55:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.706 20:55:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:22.706 20:55:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.706 20:55:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:22.706 20:55:49 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:22.706 20:55:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:22.968 MallocForNvmf0 00:05:22.968 20:55:50 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:22.968 20:55:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:22.968 MallocForNvmf1 00:05:22.968 20:55:50 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:22.968 20:55:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:23.230 [2024-07-15 20:55:50.346968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.230 20:55:50 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:23.230 20:55:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:23.490 20:55:50 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:23.490 20:55:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:23.490 20:55:50 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:23.490 20:55:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:23.751 20:55:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:23.751 20:55:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:23.751 [2024-07-15 20:55:50.989054] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.751 20:55:51 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:23.751 20:55:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.751 20:55:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.012 20:55:51 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:24.012 20:55:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.012 20:55:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.012 20:55:51 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:24.012 20:55:51 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.012 20:55:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.012 MallocBdevForConfigChangeCheck 00:05:24.012 20:55:51 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:24.012 20:55:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.012 20:55:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.012 20:55:51 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:24.012 20:55:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.584 20:55:51 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:24.584 INFO: shutting down applications... 00:05:24.584 20:55:51 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:24.584 20:55:51 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:24.584 20:55:51 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:24.584 20:55:51 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:24.844 Calling clear_iscsi_subsystem 00:05:24.844 Calling clear_nvmf_subsystem 00:05:24.844 Calling clear_nbd_subsystem 00:05:24.844 Calling clear_ublk_subsystem 00:05:24.844 Calling clear_vhost_blk_subsystem 00:05:24.844 Calling clear_vhost_scsi_subsystem 00:05:24.844 Calling clear_bdev_subsystem 00:05:24.844 20:55:52 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:24.844 20:55:52 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:24.844 20:55:52 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:24.844 20:55:52 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:24.844 20:55:52 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.844 20:55:52 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:25.105 20:55:52 json_config -- json_config/json_config.sh@345 -- # break 00:05:25.105 20:55:52 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:25.105 20:55:52 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:25.105 20:55:52 json_config -- json_config/common.sh@31 -- # local app=target 00:05:25.105 20:55:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.105 20:55:52 json_config -- json_config/common.sh@35 -- # [[ -n 1735098 ]] 00:05:25.105 20:55:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1735098 00:05:25.105 20:55:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.105 20:55:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.105 20:55:52 json_config -- json_config/common.sh@41 -- # kill -0 1735098 00:05:25.105 20:55:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.676 20:55:52 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.676 20:55:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.676 20:55:52 json_config -- json_config/common.sh@41 -- # kill -0 1735098 00:05:25.676 20:55:52 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:25.676 20:55:52 json_config -- json_config/common.sh@43 -- # break 00:05:25.676 20:55:52 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:25.676 20:55:52 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:25.676 SPDK target shutdown done 00:05:25.676 20:55:52 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:25.676 INFO: relaunching applications... 00:05:25.676 20:55:52 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.676 20:55:52 json_config -- json_config/common.sh@9 -- # local app=target 00:05:25.676 20:55:52 json_config -- json_config/common.sh@10 -- # shift 00:05:25.676 20:55:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.676 20:55:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.676 20:55:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.676 20:55:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.676 20:55:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.676 20:55:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1736006 00:05:25.676 20:55:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:25.676 Waiting for target to run... 00:05:25.676 20:55:52 json_config -- json_config/common.sh@25 -- # waitforlisten 1736006 /var/tmp/spdk_tgt.sock 00:05:25.676 20:55:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.676 20:55:52 json_config -- common/autotest_common.sh@829 -- # '[' -z 1736006 ']' 00:05:25.676 20:55:52 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.677 20:55:52 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.677 20:55:52 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.677 20:55:52 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.677 20:55:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.677 [2024-07-15 20:55:52.889264] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
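The relaunched target (its EAL output continues below) rebuilds from spdk_tgt_config.json the same NVMe-oF/TCP state the first pass created over RPC: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces, and a listener on 127.0.0.1:4420. Issued by hand against the socket used in this run, that sequence looks roughly like:

rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$rpc bdev_malloc_create 8 512 --name MallocForNvmf0           # 8 MB bdev, 512-byte blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1          # 4 MB bdev, 1024-byte blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0                # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420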
00:05:25.677 [2024-07-15 20:55:52.889318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736006 ] 00:05:25.677 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.938 [2024-07-15 20:55:53.168183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.938 [2024-07-15 20:55:53.220415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.509 [2024-07-15 20:55:53.721502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.509 [2024-07-15 20:55:53.753836] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:26.509 20:55:53 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.509 20:55:53 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:26.509 20:55:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:26.509 00:05:26.509 20:55:53 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:26.509 20:55:53 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:26.509 INFO: Checking if target configuration is the same... 00:05:26.509 20:55:53 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.509 20:55:53 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:26.509 20:55:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.769 + '[' 2 -ne 2 ']' 00:05:26.769 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:26.769 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:26.769 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:26.769 +++ basename /dev/fd/62 00:05:26.769 ++ mktemp /tmp/62.XXX 00:05:26.769 + tmp_file_1=/tmp/62.u2I 00:05:26.769 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.769 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:26.769 + tmp_file_2=/tmp/spdk_tgt_config.json.SQq 00:05:26.769 + ret=0 00:05:26.769 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.029 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.029 + diff -u /tmp/62.u2I /tmp/spdk_tgt_config.json.SQq 00:05:27.029 + echo 'INFO: JSON config files are the same' 00:05:27.029 INFO: JSON config files are the same 00:05:27.029 + rm /tmp/62.u2I /tmp/spdk_tgt_config.json.SQq 00:05:27.029 + exit 0 00:05:27.029 20:55:54 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:27.029 20:55:54 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:27.029 INFO: changing configuration and checking if this can be detected... 
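The comparison above is a round-trip idempotency check: the configuration saved back out of the relaunched target must match the spdk_tgt_config.json it booted from once both sides are sorted. Stripped of json_diff.sh's temp-file bookkeeping, it amounts to roughly the following (assuming config_filter.py filters stdin to stdout, as the trace above suggests):

filter=$SPDK_DIR/test/json_config/config_filter.py
saved=$SPDK_DIR/spdk_tgt_config.json

# normalize both sides so ordering differences do not show up in the diff
$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
$filter -method sort < "$saved" > /tmp/saved.json

diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'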
00:05:27.029 20:55:54 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.029 20:55:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.029 20:55:54 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:27.029 20:55:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.029 20:55:54 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.029 + '[' 2 -ne 2 ']' 00:05:27.029 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:27.029 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:27.029 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.029 +++ basename /dev/fd/62 00:05:27.289 ++ mktemp /tmp/62.XXX 00:05:27.290 + tmp_file_1=/tmp/62.uj1 00:05:27.290 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.290 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.290 + tmp_file_2=/tmp/spdk_tgt_config.json.jzO 00:05:27.290 + ret=0 00:05:27.290 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.550 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.550 + diff -u /tmp/62.uj1 /tmp/spdk_tgt_config.json.jzO 00:05:27.550 + ret=1 00:05:27.550 + echo '=== Start of file: /tmp/62.uj1 ===' 00:05:27.550 + cat /tmp/62.uj1 00:05:27.550 + echo '=== End of file: /tmp/62.uj1 ===' 00:05:27.550 + echo '' 00:05:27.550 + echo '=== Start of file: /tmp/spdk_tgt_config.json.jzO ===' 00:05:27.550 + cat /tmp/spdk_tgt_config.json.jzO 00:05:27.550 + echo '=== End of file: /tmp/spdk_tgt_config.json.jzO ===' 00:05:27.550 + echo '' 00:05:27.550 + rm /tmp/62.uj1 /tmp/spdk_tgt_config.json.jzO 00:05:27.550 + exit 1 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:27.550 INFO: configuration change detected. 
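The negative case above deletes the MallocBdevForConfigChangeCheck canary over RPC and repeats the same sorted diff, this time requiring it to fail. A condensed sketch in the same style as the previous one (same illustrative /tmp paths):

filter=$SPDK_DIR/test/json_config/config_filter.py

# delete the canary bdev, then require the comparison against the saved config to fail
$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

if $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
       | $filter -method sort | diff -u /tmp/saved.json - > /dev/null; then
    echo 'ERROR: configuration change was not detected' >&2
    exit 1
fi
echo 'INFO: configuration change detected.'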
00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@317 -- # [[ -n 1736006 ]] 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.550 20:55:54 json_config -- json_config/json_config.sh@323 -- # killprocess 1736006 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@948 -- # '[' -z 1736006 ']' 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@952 -- # kill -0 1736006 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@953 -- # uname 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1736006 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736006' 00:05:27.550 killing process with pid 1736006 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@967 -- # kill 1736006 00:05:27.550 20:55:54 json_config -- common/autotest_common.sh@972 -- # wait 1736006 00:05:27.811 20:55:55 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.811 20:55:55 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:27.811 20:55:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.811 20:55:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.106 20:55:55 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:28.106 20:55:55 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:28.106 INFO: Success 00:05:28.106 00:05:28.106 real 0m6.972s 
00:05:28.106 user 0m8.477s 00:05:28.106 sys 0m1.696s 00:05:28.106 20:55:55 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.106 20:55:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.106 ************************************ 00:05:28.106 END TEST json_config 00:05:28.106 ************************************ 00:05:28.106 20:55:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.106 20:55:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:28.106 20:55:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.106 20:55:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.106 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:05:28.106 ************************************ 00:05:28.106 START TEST json_config_extra_key 00:05:28.106 ************************************ 00:05:28.106 20:55:55 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.106 20:55:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.106 20:55:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.106 20:55:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.106 20:55:55 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.106 20:55:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.106 20:55:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.106 20:55:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:28.106 20:55:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.106 20:55:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:28.106 20:55:55 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:28.106 INFO: launching applications... 00:05:28.106 20:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1736773 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.106 Waiting for target to run... 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1736773 /var/tmp/spdk_tgt.sock 00:05:28.106 20:55:55 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1736773 ']' 00:05:28.106 20:55:55 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.106 20:55:55 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.106 20:55:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:28.106 20:55:55 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.106 20:55:55 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.106 20:55:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.106 [2024-07-15 20:55:55.353017] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
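The contents of extra_key.json are not shown in this log, but any file accepted by --json has the same overall shape as the configuration dumped earlier in this run: a top-level "subsystems" array whose entries carry method/params pairs replayed as RPCs at startup. An illustrative stand-in (not the real extra_key.json; Malloc0 and its parameter values are made up):

cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
$SPDK_DIR/build/bin/spdk_tgt -m 0x1 -s 1024 --json /tmp/minimal_config.json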
00:05:28.106 [2024-07-15 20:55:55.353088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736773 ] 00:05:28.449 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.449 [2024-07-15 20:55:55.651515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.449 [2024-07-15 20:55:55.704128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.021 20:55:56 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.021 20:55:56 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:29.021 20:55:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:29.021 00:05:29.022 20:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:29.022 INFO: shutting down applications... 00:05:29.022 20:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:29.022 20:55:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:29.022 20:55:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:29.022 20:55:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1736773 ]] 00:05:29.022 20:55:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1736773 00:05:29.022 20:55:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:29.022 20:55:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.022 20:55:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1736773 00:05:29.022 20:55:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.593 20:55:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.593 20:55:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.593 20:55:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1736773 00:05:29.594 20:55:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:29.594 20:55:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:29.594 20:55:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:29.594 20:55:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:29.594 SPDK target shutdown done 00:05:29.594 20:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:29.594 Success 00:05:29.594 00:05:29.594 real 0m1.449s 00:05:29.594 user 0m1.077s 00:05:29.594 sys 0m0.403s 00:05:29.594 20:55:56 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.594 20:55:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.594 ************************************ 00:05:29.594 END TEST json_config_extra_key 00:05:29.594 ************************************ 00:05:29.594 20:55:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.594 20:55:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.594 20:55:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.594 20:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.594 20:55:56 -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.594 ************************************ 00:05:29.594 START TEST alias_rpc 00:05:29.594 ************************************ 00:05:29.594 20:55:56 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.594 * Looking for test storage... 00:05:29.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:29.594 20:55:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.594 20:55:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1737136 00:05:29.594 20:55:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1737136 00:05:29.594 20:55:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.594 20:55:56 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1737136 ']' 00:05:29.594 20:55:56 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.594 20:55:56 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.594 20:55:56 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.594 20:55:56 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.594 20:55:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.594 [2024-07-15 20:55:56.862538] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:29.594 [2024-07-15 20:55:56.862612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737136 ] 00:05:29.854 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.854 [2024-07-15 20:55:56.933837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.854 [2024-07-15 20:55:57.008161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.425 20:55:57 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.426 20:55:57 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:30.426 20:55:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:30.686 20:55:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1737136 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1737136 ']' 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1737136 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737136 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737136' 00:05:30.686 killing process with pid 1737136 00:05:30.686 20:55:57 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1737136 00:05:30.686 20:55:57 alias_rpc -- common/autotest_common.sh@972 -- # wait 1737136 00:05:30.948 00:05:30.948 real 0m1.362s 00:05:30.948 user 0m1.483s 00:05:30.948 sys 0m0.375s 00:05:30.948 20:55:58 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.948 20:55:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.948 ************************************ 00:05:30.948 END TEST alias_rpc 00:05:30.948 ************************************ 00:05:30.948 20:55:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.948 20:55:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:30.948 20:55:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.948 20:55:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.948 20:55:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.948 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:05:30.948 ************************************ 00:05:30.948 START TEST spdkcli_tcp 00:05:30.948 ************************************ 00:05:30.948 20:55:58 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.948 * Looking for test storage... 00:05:31.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:31.210 20:55:58 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.210 20:55:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1737402 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1737402 00:05:31.210 20:55:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:31.210 20:55:58 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1737402 ']' 00:05:31.210 20:55:58 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.210 20:55:58 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.210 20:55:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.210 20:55:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.210 20:55:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.210 [2024-07-15 20:55:58.310624] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
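The spdkcli_tcp section that follows checks that the same JSON-RPC interface works over TCP: a target is started, socat forwards TCP port 9998 to its UNIX-domain socket, and rpc.py talks to 127.0.0.1:9998. A condensed sketch of that bridge, using the commands visible in the log and with the cleanup simplified:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Forward TCP port 9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Issue an RPC over TCP (100 retries, 2 s timeout), then tear the bridge down
    $SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $socat_pid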
00:05:31.210 [2024-07-15 20:55:58.310700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737402 ] 00:05:31.210 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.210 [2024-07-15 20:55:58.383725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.211 [2024-07-15 20:55:58.458591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.211 [2024-07-15 20:55:58.458594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.155 20:55:59 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.155 20:55:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:32.155 20:55:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1737570 00:05:32.155 20:55:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:32.155 20:55:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:32.155 [ 00:05:32.155 "bdev_malloc_delete", 00:05:32.155 "bdev_malloc_create", 00:05:32.155 "bdev_null_resize", 00:05:32.155 "bdev_null_delete", 00:05:32.155 "bdev_null_create", 00:05:32.155 "bdev_nvme_cuse_unregister", 00:05:32.155 "bdev_nvme_cuse_register", 00:05:32.155 "bdev_opal_new_user", 00:05:32.155 "bdev_opal_set_lock_state", 00:05:32.155 "bdev_opal_delete", 00:05:32.155 "bdev_opal_get_info", 00:05:32.155 "bdev_opal_create", 00:05:32.155 "bdev_nvme_opal_revert", 00:05:32.155 "bdev_nvme_opal_init", 00:05:32.155 "bdev_nvme_send_cmd", 00:05:32.155 "bdev_nvme_get_path_iostat", 00:05:32.155 "bdev_nvme_get_mdns_discovery_info", 00:05:32.155 "bdev_nvme_stop_mdns_discovery", 00:05:32.155 "bdev_nvme_start_mdns_discovery", 00:05:32.155 "bdev_nvme_set_multipath_policy", 00:05:32.155 "bdev_nvme_set_preferred_path", 00:05:32.155 "bdev_nvme_get_io_paths", 00:05:32.155 "bdev_nvme_remove_error_injection", 00:05:32.155 "bdev_nvme_add_error_injection", 00:05:32.155 "bdev_nvme_get_discovery_info", 00:05:32.155 "bdev_nvme_stop_discovery", 00:05:32.155 "bdev_nvme_start_discovery", 00:05:32.155 "bdev_nvme_get_controller_health_info", 00:05:32.155 "bdev_nvme_disable_controller", 00:05:32.155 "bdev_nvme_enable_controller", 00:05:32.155 "bdev_nvme_reset_controller", 00:05:32.155 "bdev_nvme_get_transport_statistics", 00:05:32.155 "bdev_nvme_apply_firmware", 00:05:32.155 "bdev_nvme_detach_controller", 00:05:32.155 "bdev_nvme_get_controllers", 00:05:32.155 "bdev_nvme_attach_controller", 00:05:32.155 "bdev_nvme_set_hotplug", 00:05:32.155 "bdev_nvme_set_options", 00:05:32.155 "bdev_passthru_delete", 00:05:32.155 "bdev_passthru_create", 00:05:32.155 "bdev_lvol_set_parent_bdev", 00:05:32.155 "bdev_lvol_set_parent", 00:05:32.155 "bdev_lvol_check_shallow_copy", 00:05:32.155 "bdev_lvol_start_shallow_copy", 00:05:32.155 "bdev_lvol_grow_lvstore", 00:05:32.155 "bdev_lvol_get_lvols", 00:05:32.155 "bdev_lvol_get_lvstores", 00:05:32.155 "bdev_lvol_delete", 00:05:32.155 "bdev_lvol_set_read_only", 00:05:32.155 "bdev_lvol_resize", 00:05:32.156 "bdev_lvol_decouple_parent", 00:05:32.156 "bdev_lvol_inflate", 00:05:32.156 "bdev_lvol_rename", 00:05:32.156 "bdev_lvol_clone_bdev", 00:05:32.156 "bdev_lvol_clone", 00:05:32.156 "bdev_lvol_snapshot", 00:05:32.156 "bdev_lvol_create", 00:05:32.156 "bdev_lvol_delete_lvstore", 00:05:32.156 
"bdev_lvol_rename_lvstore", 00:05:32.156 "bdev_lvol_create_lvstore", 00:05:32.156 "bdev_raid_set_options", 00:05:32.156 "bdev_raid_remove_base_bdev", 00:05:32.156 "bdev_raid_add_base_bdev", 00:05:32.156 "bdev_raid_delete", 00:05:32.156 "bdev_raid_create", 00:05:32.156 "bdev_raid_get_bdevs", 00:05:32.156 "bdev_error_inject_error", 00:05:32.156 "bdev_error_delete", 00:05:32.156 "bdev_error_create", 00:05:32.156 "bdev_split_delete", 00:05:32.156 "bdev_split_create", 00:05:32.156 "bdev_delay_delete", 00:05:32.156 "bdev_delay_create", 00:05:32.156 "bdev_delay_update_latency", 00:05:32.156 "bdev_zone_block_delete", 00:05:32.156 "bdev_zone_block_create", 00:05:32.156 "blobfs_create", 00:05:32.156 "blobfs_detect", 00:05:32.156 "blobfs_set_cache_size", 00:05:32.156 "bdev_aio_delete", 00:05:32.156 "bdev_aio_rescan", 00:05:32.156 "bdev_aio_create", 00:05:32.156 "bdev_ftl_set_property", 00:05:32.156 "bdev_ftl_get_properties", 00:05:32.156 "bdev_ftl_get_stats", 00:05:32.156 "bdev_ftl_unmap", 00:05:32.156 "bdev_ftl_unload", 00:05:32.156 "bdev_ftl_delete", 00:05:32.156 "bdev_ftl_load", 00:05:32.156 "bdev_ftl_create", 00:05:32.156 "bdev_virtio_attach_controller", 00:05:32.156 "bdev_virtio_scsi_get_devices", 00:05:32.156 "bdev_virtio_detach_controller", 00:05:32.156 "bdev_virtio_blk_set_hotplug", 00:05:32.156 "bdev_iscsi_delete", 00:05:32.156 "bdev_iscsi_create", 00:05:32.156 "bdev_iscsi_set_options", 00:05:32.156 "accel_error_inject_error", 00:05:32.156 "ioat_scan_accel_module", 00:05:32.156 "dsa_scan_accel_module", 00:05:32.156 "iaa_scan_accel_module", 00:05:32.156 "vfu_virtio_create_scsi_endpoint", 00:05:32.156 "vfu_virtio_scsi_remove_target", 00:05:32.156 "vfu_virtio_scsi_add_target", 00:05:32.156 "vfu_virtio_create_blk_endpoint", 00:05:32.156 "vfu_virtio_delete_endpoint", 00:05:32.156 "keyring_file_remove_key", 00:05:32.156 "keyring_file_add_key", 00:05:32.156 "keyring_linux_set_options", 00:05:32.156 "iscsi_get_histogram", 00:05:32.156 "iscsi_enable_histogram", 00:05:32.156 "iscsi_set_options", 00:05:32.156 "iscsi_get_auth_groups", 00:05:32.156 "iscsi_auth_group_remove_secret", 00:05:32.156 "iscsi_auth_group_add_secret", 00:05:32.156 "iscsi_delete_auth_group", 00:05:32.156 "iscsi_create_auth_group", 00:05:32.156 "iscsi_set_discovery_auth", 00:05:32.156 "iscsi_get_options", 00:05:32.156 "iscsi_target_node_request_logout", 00:05:32.156 "iscsi_target_node_set_redirect", 00:05:32.156 "iscsi_target_node_set_auth", 00:05:32.156 "iscsi_target_node_add_lun", 00:05:32.156 "iscsi_get_stats", 00:05:32.156 "iscsi_get_connections", 00:05:32.156 "iscsi_portal_group_set_auth", 00:05:32.156 "iscsi_start_portal_group", 00:05:32.156 "iscsi_delete_portal_group", 00:05:32.156 "iscsi_create_portal_group", 00:05:32.156 "iscsi_get_portal_groups", 00:05:32.156 "iscsi_delete_target_node", 00:05:32.156 "iscsi_target_node_remove_pg_ig_maps", 00:05:32.156 "iscsi_target_node_add_pg_ig_maps", 00:05:32.156 "iscsi_create_target_node", 00:05:32.156 "iscsi_get_target_nodes", 00:05:32.156 "iscsi_delete_initiator_group", 00:05:32.156 "iscsi_initiator_group_remove_initiators", 00:05:32.156 "iscsi_initiator_group_add_initiators", 00:05:32.156 "iscsi_create_initiator_group", 00:05:32.156 "iscsi_get_initiator_groups", 00:05:32.156 "nvmf_set_crdt", 00:05:32.156 "nvmf_set_config", 00:05:32.156 "nvmf_set_max_subsystems", 00:05:32.156 "nvmf_stop_mdns_prr", 00:05:32.156 "nvmf_publish_mdns_prr", 00:05:32.156 "nvmf_subsystem_get_listeners", 00:05:32.156 "nvmf_subsystem_get_qpairs", 00:05:32.156 "nvmf_subsystem_get_controllers", 00:05:32.156 
"nvmf_get_stats", 00:05:32.156 "nvmf_get_transports", 00:05:32.156 "nvmf_create_transport", 00:05:32.156 "nvmf_get_targets", 00:05:32.156 "nvmf_delete_target", 00:05:32.156 "nvmf_create_target", 00:05:32.156 "nvmf_subsystem_allow_any_host", 00:05:32.156 "nvmf_subsystem_remove_host", 00:05:32.156 "nvmf_subsystem_add_host", 00:05:32.156 "nvmf_ns_remove_host", 00:05:32.156 "nvmf_ns_add_host", 00:05:32.156 "nvmf_subsystem_remove_ns", 00:05:32.156 "nvmf_subsystem_add_ns", 00:05:32.156 "nvmf_subsystem_listener_set_ana_state", 00:05:32.156 "nvmf_discovery_get_referrals", 00:05:32.156 "nvmf_discovery_remove_referral", 00:05:32.156 "nvmf_discovery_add_referral", 00:05:32.156 "nvmf_subsystem_remove_listener", 00:05:32.156 "nvmf_subsystem_add_listener", 00:05:32.156 "nvmf_delete_subsystem", 00:05:32.156 "nvmf_create_subsystem", 00:05:32.156 "nvmf_get_subsystems", 00:05:32.156 "env_dpdk_get_mem_stats", 00:05:32.156 "nbd_get_disks", 00:05:32.156 "nbd_stop_disk", 00:05:32.156 "nbd_start_disk", 00:05:32.156 "ublk_recover_disk", 00:05:32.156 "ublk_get_disks", 00:05:32.156 "ublk_stop_disk", 00:05:32.156 "ublk_start_disk", 00:05:32.156 "ublk_destroy_target", 00:05:32.156 "ublk_create_target", 00:05:32.156 "virtio_blk_create_transport", 00:05:32.156 "virtio_blk_get_transports", 00:05:32.156 "vhost_controller_set_coalescing", 00:05:32.156 "vhost_get_controllers", 00:05:32.156 "vhost_delete_controller", 00:05:32.156 "vhost_create_blk_controller", 00:05:32.156 "vhost_scsi_controller_remove_target", 00:05:32.156 "vhost_scsi_controller_add_target", 00:05:32.156 "vhost_start_scsi_controller", 00:05:32.156 "vhost_create_scsi_controller", 00:05:32.156 "thread_set_cpumask", 00:05:32.156 "framework_get_governor", 00:05:32.156 "framework_get_scheduler", 00:05:32.156 "framework_set_scheduler", 00:05:32.156 "framework_get_reactors", 00:05:32.156 "thread_get_io_channels", 00:05:32.156 "thread_get_pollers", 00:05:32.156 "thread_get_stats", 00:05:32.156 "framework_monitor_context_switch", 00:05:32.156 "spdk_kill_instance", 00:05:32.156 "log_enable_timestamps", 00:05:32.156 "log_get_flags", 00:05:32.156 "log_clear_flag", 00:05:32.156 "log_set_flag", 00:05:32.156 "log_get_level", 00:05:32.156 "log_set_level", 00:05:32.156 "log_get_print_level", 00:05:32.156 "log_set_print_level", 00:05:32.156 "framework_enable_cpumask_locks", 00:05:32.156 "framework_disable_cpumask_locks", 00:05:32.156 "framework_wait_init", 00:05:32.156 "framework_start_init", 00:05:32.156 "scsi_get_devices", 00:05:32.156 "bdev_get_histogram", 00:05:32.156 "bdev_enable_histogram", 00:05:32.156 "bdev_set_qos_limit", 00:05:32.156 "bdev_set_qd_sampling_period", 00:05:32.156 "bdev_get_bdevs", 00:05:32.156 "bdev_reset_iostat", 00:05:32.156 "bdev_get_iostat", 00:05:32.156 "bdev_examine", 00:05:32.156 "bdev_wait_for_examine", 00:05:32.156 "bdev_set_options", 00:05:32.156 "notify_get_notifications", 00:05:32.156 "notify_get_types", 00:05:32.156 "accel_get_stats", 00:05:32.156 "accel_set_options", 00:05:32.156 "accel_set_driver", 00:05:32.156 "accel_crypto_key_destroy", 00:05:32.156 "accel_crypto_keys_get", 00:05:32.156 "accel_crypto_key_create", 00:05:32.156 "accel_assign_opc", 00:05:32.156 "accel_get_module_info", 00:05:32.156 "accel_get_opc_assignments", 00:05:32.156 "vmd_rescan", 00:05:32.156 "vmd_remove_device", 00:05:32.156 "vmd_enable", 00:05:32.156 "sock_get_default_impl", 00:05:32.156 "sock_set_default_impl", 00:05:32.156 "sock_impl_set_options", 00:05:32.156 "sock_impl_get_options", 00:05:32.156 "iobuf_get_stats", 00:05:32.156 "iobuf_set_options", 
00:05:32.157 "keyring_get_keys", 00:05:32.157 "framework_get_pci_devices", 00:05:32.157 "framework_get_config", 00:05:32.157 "framework_get_subsystems", 00:05:32.157 "vfu_tgt_set_base_path", 00:05:32.157 "trace_get_info", 00:05:32.157 "trace_get_tpoint_group_mask", 00:05:32.157 "trace_disable_tpoint_group", 00:05:32.157 "trace_enable_tpoint_group", 00:05:32.157 "trace_clear_tpoint_mask", 00:05:32.157 "trace_set_tpoint_mask", 00:05:32.157 "spdk_get_version", 00:05:32.157 "rpc_get_methods" 00:05:32.157 ] 00:05:32.157 20:55:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.157 20:55:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:32.157 20:55:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1737402 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1737402 ']' 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1737402 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737402 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737402' 00:05:32.157 killing process with pid 1737402 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1737402 00:05:32.157 20:55:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1737402 00:05:32.419 00:05:32.419 real 0m1.403s 00:05:32.419 user 0m2.530s 00:05:32.419 sys 0m0.443s 00:05:32.419 20:55:59 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.419 20:55:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.419 ************************************ 00:05:32.419 END TEST spdkcli_tcp 00:05:32.419 ************************************ 00:05:32.419 20:55:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.419 20:55:59 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.419 20:55:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.419 20:55:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.419 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:05:32.419 ************************************ 00:05:32.419 START TEST dpdk_mem_utility 00:05:32.419 ************************************ 00:05:32.419 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.419 * Looking for test storage... 
00:05:32.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:32.681 20:55:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:32.681 20:55:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1737680 00:05:32.681 20:55:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1737680 00:05:32.681 20:55:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.681 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1737680 ']' 00:05:32.681 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.681 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.681 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.681 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.681 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.681 [2024-07-15 20:55:59.779188] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:32.681 [2024-07-15 20:55:59.779275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737680 ] 00:05:32.681 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.681 [2024-07-15 20:55:59.853746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.681 [2024-07-15 20:55:59.929363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.626 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.626 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:33.626 20:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:33.626 20:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:33.626 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.626 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.626 { 00:05:33.626 "filename": "/tmp/spdk_mem_dump.txt" 00:05:33.626 } 00:05:33.626 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.626 20:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:33.626 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:33.626 1 heaps totaling size 814.000000 MiB 00:05:33.626 size: 814.000000 MiB heap id: 0 00:05:33.626 end heaps---------- 00:05:33.626 8 mempools totaling size 598.116089 MiB 00:05:33.626 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:33.626 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:33.626 size: 84.521057 MiB name: bdev_io_1737680 00:05:33.626 size: 51.011292 MiB name: evtpool_1737680 00:05:33.626 
size: 50.003479 MiB name: msgpool_1737680 00:05:33.626 size: 21.763794 MiB name: PDU_Pool 00:05:33.626 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:33.626 size: 0.026123 MiB name: Session_Pool 00:05:33.626 end mempools------- 00:05:33.626 6 memzones totaling size 4.142822 MiB 00:05:33.626 size: 1.000366 MiB name: RG_ring_0_1737680 00:05:33.626 size: 1.000366 MiB name: RG_ring_1_1737680 00:05:33.626 size: 1.000366 MiB name: RG_ring_4_1737680 00:05:33.626 size: 1.000366 MiB name: RG_ring_5_1737680 00:05:33.626 size: 0.125366 MiB name: RG_ring_2_1737680 00:05:33.626 size: 0.015991 MiB name: RG_ring_3_1737680 00:05:33.626 end memzones------- 00:05:33.626 20:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:33.626 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:33.626 list of free elements. size: 12.519348 MiB 00:05:33.626 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:33.626 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:33.626 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:33.626 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:33.626 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:33.626 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:33.626 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:33.626 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:33.626 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:33.626 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:33.626 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:33.626 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:33.626 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:33.626 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:33.626 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:33.626 list of standard malloc elements. 
size: 199.218079 MiB 00:05:33.626 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:33.626 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:33.626 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:33.626 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:33.626 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:33.626 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:33.626 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:33.627 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:33.627 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:33.627 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:33.627 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:33.627 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:33.627 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:33.627 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:33.627 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:33.627 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:33.627 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:33.627 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:33.627 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:33.627 list of memzone associated elements. 
size: 602.262573 MiB 00:05:33.627 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:33.627 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:33.627 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:33.627 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:33.627 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:33.627 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1737680_0 00:05:33.627 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:33.627 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1737680_0 00:05:33.627 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:33.627 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1737680_0 00:05:33.627 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:33.627 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:33.627 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:33.627 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:33.627 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:33.627 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1737680 00:05:33.627 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:33.627 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1737680 00:05:33.627 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:33.627 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1737680 00:05:33.627 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:33.627 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:33.627 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:33.627 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:33.627 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:33.627 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:33.627 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:33.627 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:33.627 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:33.627 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1737680 00:05:33.627 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:33.627 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1737680 00:05:33.627 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:33.627 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1737680 00:05:33.627 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:33.627 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1737680 00:05:33.627 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:33.627 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1737680 00:05:33.627 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:33.627 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:33.627 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:33.627 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:33.627 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:33.627 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:33.627 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:33.627 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1737680 00:05:33.627 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:33.627 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:33.627 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:33.627 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:33.627 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:33.627 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1737680 00:05:33.627 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:33.627 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:33.627 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:33.627 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1737680 00:05:33.627 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:33.627 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1737680 00:05:33.627 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:33.627 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:33.627 20:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:33.627 20:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1737680 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1737680 ']' 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1737680 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737680 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737680' 00:05:33.627 killing process with pid 1737680 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1737680 00:05:33.627 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1737680 00:05:33.889 00:05:33.889 real 0m1.318s 00:05:33.889 user 0m1.397s 00:05:33.889 sys 0m0.389s 00:05:33.889 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.889 20:56:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.889 ************************************ 00:05:33.889 END TEST dpdk_mem_utility 00:05:33.889 ************************************ 00:05:33.889 20:56:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.889 20:56:00 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:33.889 20:56:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.889 20:56:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.889 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:05:33.889 ************************************ 00:05:33.889 START TEST event 00:05:33.889 ************************************ 00:05:33.889 20:56:01 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:33.889 * Looking for test storage... 
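The memory report above is produced in two steps: an RPC asks the running target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then summarizes the dump. Reproducing it by hand against a running target looks roughly like this, with the script paths as printed in the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Ask the target to write its DPDK memory stats (returns the dump file name)
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats
    # Heap / mempool / memzone summary from the dump
    $SPDK/scripts/dpdk_mem_info.py
    # Detailed free-element, malloc-element and memzone layout for heap 0, as shown above
    $SPDK/scripts/dpdk_mem_info.py -m 0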
00:05:33.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:33.889 20:56:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:33.889 20:56:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:33.889 20:56:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.889 20:56:01 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:33.889 20:56:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.889 20:56:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.889 ************************************ 00:05:33.889 START TEST event_perf 00:05:33.889 ************************************ 00:05:33.889 20:56:01 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.889 Running I/O for 1 seconds...[2024-07-15 20:56:01.166573] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:33.889 [2024-07-15 20:56:01.166685] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738034 ] 00:05:34.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.150 [2024-07-15 20:56:01.240638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.150 [2024-07-15 20:56:01.318034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.150 [2024-07-15 20:56:01.318156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.150 [2024-07-15 20:56:01.318318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.150 [2024-07-15 20:56:01.318481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.089 Running I/O for 1 seconds... 00:05:35.089 lcore 0: 177288 00:05:35.089 lcore 1: 177287 00:05:35.089 lcore 2: 177285 00:05:35.089 lcore 3: 177289 00:05:35.089 done. 00:05:35.089 00:05:35.089 real 0m1.228s 00:05:35.089 user 0m4.138s 00:05:35.089 sys 0m0.086s 00:05:35.089 20:56:02 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.089 20:56:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.089 ************************************ 00:05:35.089 END TEST event_perf 00:05:35.089 ************************************ 00:05:35.349 20:56:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.349 20:56:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:35.349 20:56:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:35.349 20:56:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.349 20:56:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.349 ************************************ 00:05:35.349 START TEST event_reactor 00:05:35.349 ************************************ 00:05:35.349 20:56:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:35.349 [2024-07-15 20:56:02.469306] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
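The lcore lines in the event_perf output above appear to be the number of events each reactor processed during the one-second run. Re-running that microbenchmark by hand uses the same binary and flags shown in the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Event-throughput microbenchmark: cores 0-3 (-m 0xF), 1 second (-t 1)
    $SPDK/test/event/event_perf/event_perf -m 0xF -t 1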
00:05:35.349 [2024-07-15 20:56:02.469400] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738392 ] 00:05:35.349 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.349 [2024-07-15 20:56:02.541404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.349 [2024-07-15 20:56:02.608869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.730 test_start 00:05:36.730 oneshot 00:05:36.730 tick 100 00:05:36.730 tick 100 00:05:36.730 tick 250 00:05:36.730 tick 100 00:05:36.730 tick 100 00:05:36.730 tick 100 00:05:36.730 tick 250 00:05:36.730 tick 500 00:05:36.730 tick 100 00:05:36.730 tick 100 00:05:36.730 tick 250 00:05:36.730 tick 100 00:05:36.730 tick 100 00:05:36.730 test_end 00:05:36.730 00:05:36.730 real 0m1.213s 00:05:36.730 user 0m1.129s 00:05:36.730 sys 0m0.080s 00:05:36.730 20:56:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.730 20:56:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:36.730 ************************************ 00:05:36.730 END TEST event_reactor 00:05:36.730 ************************************ 00:05:36.730 20:56:03 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.730 20:56:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.730 20:56:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:36.730 20:56:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.730 20:56:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.730 ************************************ 00:05:36.730 START TEST event_reactor_perf 00:05:36.730 ************************************ 00:05:36.730 20:56:03 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.730 [2024-07-15 20:56:03.755581] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
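The event_reactor test above schedules one-shot and timed events on a single reactor and prints a tick line as each timer fires; event_reactor_perf, which starts next, measures raw single-core event throughput. Both are plain binaries and can be re-run directly with the invocations seen in the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Reactor one-shot/tick test and single-core event throughput test, 1 second each
    $SPDK/test/event/reactor/reactor -t 1
    $SPDK/test/event/reactor_perf/reactor_perf -t 1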
00:05:36.730 [2024-07-15 20:56:03.755690] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738741 ] 00:05:36.730 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.730 [2024-07-15 20:56:03.826913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.730 [2024-07-15 20:56:03.895666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.671 test_start 00:05:37.671 test_end 00:05:37.671 Performance: 367283 events per second 00:05:37.671 00:05:37.671 real 0m1.213s 00:05:37.671 user 0m1.127s 00:05:37.671 sys 0m0.082s 00:05:37.671 20:56:04 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.671 20:56:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.671 ************************************ 00:05:37.671 END TEST event_reactor_perf 00:05:37.671 ************************************ 00:05:37.933 20:56:04 event -- common/autotest_common.sh@1142 -- # return 0 00:05:37.933 20:56:04 event -- event/event.sh@49 -- # uname -s 00:05:37.933 20:56:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:37.933 20:56:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:37.933 20:56:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.933 20:56:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.933 20:56:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.933 ************************************ 00:05:37.933 START TEST event_scheduler 00:05:37.933 ************************************ 00:05:37.933 20:56:05 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:37.933 * Looking for test storage... 00:05:37.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:37.933 20:56:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:37.933 20:56:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1739025 00:05:37.933 20:56:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.933 20:56:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:37.933 20:56:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1739025 00:05:37.933 20:56:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1739025 ']' 00:05:37.933 20:56:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.933 20:56:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.933 20:56:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
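The event_scheduler run that follows starts the scheduler test application with --wait-for-rpc, so nothing is initialized until a scheduler has been chosen over RPC. The launch line from the log, slightly reformatted:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Cores 0-3 (-m 0xF), main lcore 2 (-p 0x2); --wait-for-rpc holds subsystem
    # initialization until the scheduler has been selected over RPC
    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &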
00:05:37.933 20:56:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.933 20:56:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.933 [2024-07-15 20:56:05.180833] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:37.933 [2024-07-15 20:56:05.180905] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739025 ] 00:05:37.933 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.194 [2024-07-15 20:56:05.243225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.194 [2024-07-15 20:56:05.309588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.194 [2024-07-15 20:56:05.309745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.194 [2024-07-15 20:56:05.309790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.194 [2024-07-15 20:56:05.309793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.767 20:56:05 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.767 20:56:05 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:38.767 20:56:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:38.767 20:56:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.767 20:56:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.767 [2024-07-15 20:56:05.972015] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:38.767 [2024-07-15 20:56:05.972029] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:38.767 [2024-07-15 20:56:05.972037] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:38.767 [2024-07-15 20:56:05.972042] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:38.767 [2024-07-15 20:56:05.972045] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:38.767 20:56:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.767 20:56:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:38.767 20:56:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.767 20:56:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.767 [2024-07-15 20:56:06.026650] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
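Once the app reports that it is waiting, the test selects the dynamic scheduler and only then lets initialization proceed, which is what the framework_set_scheduler and framework_start_init calls above show. Issued by hand it amounts to:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Pick the dynamic scheduler while the app is still held by --wait-for-rpc
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic
    # Let subsystem initialization continue, then confirm the active scheduler
    $SPDK/scripts/rpc.py framework_start_init
    $SPDK/scripts/rpc.py framework_get_scheduler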
00:05:38.767 20:56:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.767 20:56:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:38.767 20:56:06 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.768 20:56:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.768 20:56:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 ************************************ 00:05:39.029 START TEST scheduler_create_thread 00:05:39.029 ************************************ 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 2 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 3 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 4 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 5 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 6 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 7 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 8 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.029 9 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.029 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.602 10 00:05:39.602 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.602 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:39.602 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.602 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.988 20:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.988 20:56:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:40.988 20:56:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:40.988 20:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.988 20:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.560 20:56:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.560 20:56:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:41.560 20:56:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.560 20:56:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.503 20:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.503 20:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.503 20:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.503 20:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.503 20:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.075 20:56:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.075 00:05:43.075 real 0m4.223s 00:05:43.075 user 0m0.025s 00:05:43.075 sys 0m0.006s 00:05:43.075 20:56:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.075 20:56:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.075 ************************************ 00:05:43.075 END TEST scheduler_create_thread 00:05:43.075 ************************************ 00:05:43.075 20:56:10 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:43.075 20:56:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.075 20:56:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1739025 00:05:43.075 20:56:10 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1739025 ']' 00:05:43.075 20:56:10 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1739025 00:05:43.075 20:56:10 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:43.075 20:56:10 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.075 20:56:10 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1739025 00:05:43.336 20:56:10 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:43.336 20:56:10 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:43.336 20:56:10 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1739025' 00:05:43.336 killing process with pid 1739025 00:05:43.336 20:56:10 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1739025 00:05:43.336 20:56:10 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1739025 00:05:43.336 [2024-07-15 20:56:10.567891] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
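The scheduler_create_thread subtest above drives the target purely through RPC-plugin calls: it creates pinned active and idle threads with various CPU masks and active percentages, changes one thread's active load, and deletes another, letting the dynamic scheduler redistribute the rest. A condensed sketch of those calls is below; exporting PYTHONPATH so rpc.py can import the test's scheduler_plugin is an assumption about the local setup, and the thread ids (11 and 12 in this run) are whatever scheduler_thread_create returned.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Assumption: make the test's RPC plugin importable for rpc.py --plugin
    export PYTHONPATH=$SPDK/test/event/scheduler:$PYTHONPATH
    RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
    # An active thread pinned to core 0 (100% busy) and an idle one (0% busy)
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Thread ids come from scheduler_thread_create's output (11 and 12 in this run)
    $RPC scheduler_thread_set_active 11 50
    $RPC scheduler_thread_delete 12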
00:05:43.597 00:05:43.597 real 0m5.710s 00:05:43.597 user 0m12.724s 00:05:43.597 sys 0m0.380s 00:05:43.597 20:56:10 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.597 20:56:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.597 ************************************ 00:05:43.597 END TEST event_scheduler 00:05:43.597 ************************************ 00:05:43.597 20:56:10 event -- common/autotest_common.sh@1142 -- # return 0 00:05:43.597 20:56:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:43.597 20:56:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:43.597 20:56:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.597 20:56:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.597 20:56:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.597 ************************************ 00:05:43.597 START TEST app_repeat 00:05:43.597 ************************************ 00:05:43.597 20:56:10 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1740187 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1740187' 00:05:43.597 Process app_repeat pid: 1740187 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:43.597 spdk_app_start Round 0 00:05:43.597 20:56:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1740187 /var/tmp/spdk-nbd.sock 00:05:43.597 20:56:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1740187 ']' 00:05:43.597 20:56:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.597 20:56:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.597 20:56:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.597 20:56:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.597 20:56:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.597 [2024-07-15 20:56:10.860365] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:43.597 [2024-07-15 20:56:10.860430] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740187 ] 00:05:43.858 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.858 [2024-07-15 20:56:10.928997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.858 [2024-07-15 20:56:10.993621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.858 [2024-07-15 20:56:10.993625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.428 20:56:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.428 20:56:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:44.428 20:56:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.690 Malloc0 00:05:44.690 20:56:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.690 Malloc1 00:05:44.690 20:56:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.690 20:56:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.950 /dev/nbd0 00:05:44.950 20:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.950 20:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.950 20:56:12 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.950 1+0 records in 00:05:44.950 1+0 records out 00:05:44.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275486 s, 14.9 MB/s 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.950 20:56:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:44.950 20:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.950 20:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.950 20:56:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.210 /dev/nbd1 00:05:45.210 20:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.210 20:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.210 1+0 records in 00:05:45.210 1+0 records out 00:05:45.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025322 s, 16.2 MB/s 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.210 20:56:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:45.210 20:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.211 20:56:12 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.211 20:56:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.211 20:56:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.211 20:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.211 20:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.211 { 00:05:45.211 "nbd_device": "/dev/nbd0", 00:05:45.211 "bdev_name": "Malloc0" 00:05:45.211 }, 00:05:45.211 { 00:05:45.211 "nbd_device": "/dev/nbd1", 00:05:45.211 "bdev_name": "Malloc1" 00:05:45.211 } 00:05:45.211 ]' 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.472 { 00:05:45.472 "nbd_device": "/dev/nbd0", 00:05:45.472 "bdev_name": "Malloc0" 00:05:45.472 }, 00:05:45.472 { 00:05:45.472 "nbd_device": "/dev/nbd1", 00:05:45.472 "bdev_name": "Malloc1" 00:05:45.472 } 00:05:45.472 ]' 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.472 /dev/nbd1' 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.472 /dev/nbd1' 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.472 256+0 records in 00:05:45.472 256+0 records out 00:05:45.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124748 s, 84.1 MB/s 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.472 256+0 records in 00:05:45.472 256+0 records out 00:05:45.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156654 s, 66.9 MB/s 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.472 256+0 records in 00:05:45.472 256+0 records out 00:05:45.472 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0218301 s, 48.0 MB/s 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.472 20:56:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.473 20:56:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.733 20:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.733 20:56:12 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.734 20:56:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.734 20:56:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.734 20:56:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.734 20:56:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.734 20:56:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.734 20:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.993 20:56:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.993 20:56:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.253 20:56:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.253 [2024-07-15 20:56:13.435869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.253 [2024-07-15 20:56:13.500718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.253 [2024-07-15 20:56:13.500720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.253 [2024-07-15 20:56:13.532118] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.253 [2024-07-15 20:56:13.532156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.572 20:56:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.572 20:56:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:49.572 spdk_app_start Round 1 00:05:49.572 20:56:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1740187 /var/tmp/spdk-nbd.sock 00:05:49.572 20:56:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1740187 ']' 00:05:49.572 20:56:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.572 20:56:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.572 20:56:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
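The Round 0 I/O above follows the nbd_dd_data_verify write/verify pattern: fill a scratch file from /dev/urandom, copy it onto each exported NBD device with O_DIRECT, then compare the device contents back against the file before removing it. A standalone sketch of that pattern, with the scratch path shortened for readability (the trace uses test/event/nbdrandtest inside the workspace):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                               # verify phase
    done
    rm "$tmp"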
00:05:49.572 20:56:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.572 20:56:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.572 20:56:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.572 20:56:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:49.572 20:56:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.572 Malloc0 00:05:49.572 20:56:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.572 Malloc1 00:05:49.572 20:56:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.572 20:56:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.832 /dev/nbd0 00:05:49.832 20:56:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.832 20:56:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:49.832 1+0 records in 00:05:49.832 1+0 records out 00:05:49.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237085 s, 17.3 MB/s 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.832 20:56:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:49.832 20:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.832 20:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.832 20:56:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.090 /dev/nbd1 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.090 1+0 records in 00:05:50.090 1+0 records out 00:05:50.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282527 s, 14.5 MB/s 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.090 20:56:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:50.090 { 00:05:50.090 "nbd_device": "/dev/nbd0", 00:05:50.090 "bdev_name": "Malloc0" 00:05:50.090 }, 00:05:50.090 { 00:05:50.090 "nbd_device": "/dev/nbd1", 00:05:50.090 "bdev_name": "Malloc1" 00:05:50.090 } 00:05:50.090 ]' 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.090 { 00:05:50.090 "nbd_device": "/dev/nbd0", 00:05:50.090 "bdev_name": "Malloc0" 00:05:50.090 }, 00:05:50.090 { 00:05:50.090 "nbd_device": "/dev/nbd1", 00:05:50.090 "bdev_name": "Malloc1" 00:05:50.090 } 00:05:50.090 ]' 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.090 20:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.090 /dev/nbd1' 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.350 /dev/nbd1' 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.350 256+0 records in 00:05:50.350 256+0 records out 00:05:50.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116585 s, 89.9 MB/s 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.350 256+0 records in 00:05:50.350 256+0 records out 00:05:50.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162693 s, 64.5 MB/s 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.350 256+0 records in 00:05:50.350 256+0 records out 00:05:50.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172373 s, 60.8 MB/s 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.350 20:56:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.610 20:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.897 20:56:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.897 20:56:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.897 20:56:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.158 [2024-07-15 20:56:18.290781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.158 [2024-07-15 20:56:18.355561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.158 [2024-07-15 20:56:18.355563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.158 [2024-07-15 20:56:18.387872] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.158 [2024-07-15 20:56:18.387908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.478 20:56:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.478 20:56:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.478 spdk_app_start Round 2 00:05:54.478 20:56:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1740187 /var/tmp/spdk-nbd.sock 00:05:54.478 20:56:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1740187 ']' 00:05:54.478 20:56:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.478 20:56:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.478 20:56:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
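Each nbd_start_disk call in this log is followed by a waitfornbd-style readiness check: poll /proc/partitions until the nbd device shows up, then issue a single O_DIRECT read and confirm it returned data. A condensed sketch of that helper, assuming the 20-attempt budget seen in the trace (the sleep interval and scratch path are illustrative, and the real helper also retries the read):

    waitfornbd() {
        local nbd_name=$1 i size scratch=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [ "$size" != 0 ]    # fail if the read produced an empty file
    }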
00:05:54.478 20:56:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.478 20:56:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.478 20:56:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.478 20:56:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:54.478 20:56:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.478 Malloc0 00:05:54.478 20:56:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.478 Malloc1 00:05:54.478 20:56:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.478 20:56:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.478 20:56:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.478 20:56:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.478 20:56:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.478 20:56:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.478 20:56:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.478 20:56:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.479 20:56:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.479 20:56:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.479 20:56:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.479 20:56:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.479 20:56:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.479 20:56:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.479 20:56:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.479 20:56:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.740 /dev/nbd0 00:05:54.740 20:56:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.740 20:56:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:54.740 1+0 records in 00:05:54.740 1+0 records out 00:05:54.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279575 s, 14.7 MB/s 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:54.740 20:56:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.740 20:56:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.740 20:56:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.740 /dev/nbd1 00:05:54.740 20:56:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.740 20:56:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.740 1+0 records in 00:05:54.740 1+0 records out 00:05:54.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259595 s, 15.8 MB/s 00:05:54.740 20:56:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.740 20:56:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:54.740 20:56:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.740 20:56:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:54.740 20:56:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:54.740 20:56:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.740 20:56:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.740 20:56:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.740 20:56:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.740 20:56:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.002 20:56:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:55.002 { 00:05:55.002 "nbd_device": "/dev/nbd0", 00:05:55.002 "bdev_name": "Malloc0" 00:05:55.002 }, 00:05:55.002 { 00:05:55.002 "nbd_device": "/dev/nbd1", 00:05:55.002 "bdev_name": "Malloc1" 00:05:55.002 } 00:05:55.002 ]' 00:05:55.002 20:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.002 { 00:05:55.002 "nbd_device": "/dev/nbd0", 00:05:55.002 "bdev_name": "Malloc0" 00:05:55.002 }, 00:05:55.002 { 00:05:55.002 "nbd_device": "/dev/nbd1", 00:05:55.002 "bdev_name": "Malloc1" 00:05:55.002 } 00:05:55.002 ]' 00:05:55.002 20:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.002 20:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.002 /dev/nbd1' 00:05:55.002 20:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.003 /dev/nbd1' 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.003 256+0 records in 00:05:55.003 256+0 records out 00:05:55.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124262 s, 84.4 MB/s 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.003 256+0 records in 00:05:55.003 256+0 records out 00:05:55.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157447 s, 66.6 MB/s 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.003 20:56:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.264 256+0 records in 00:05:55.264 256+0 records out 00:05:55.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331145 s, 31.7 MB/s 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.264 20:56:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.265 20:56:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.525 20:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.786 20:56:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.786 20:56:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.786 20:56:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.047 [2024-07-15 20:56:23.163819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.047 [2024-07-15 20:56:23.228289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.047 [2024-07-15 20:56:23.228292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.047 [2024-07-15 20:56:23.259767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.047 [2024-07-15 20:56:23.259805] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.343 20:56:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1740187 /var/tmp/spdk-nbd.sock 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1740187 ']' 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
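After each round stops its disks, the nbd_get_disks / jq / grep -c sequence above re-counts the exported NBD devices so the test can assert both "two devices after start" and "zero devices after stop". A minimal sketch of that count helper, assuming rpc.py is on PATH and the -s socket matches the app under test:

    nbd_get_count() {
        local rpc_sock=$1
        rpc.py -s "$rpc_sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true    # grep -c exits non-zero when the count is 0
    }
    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [ "$count" -eq 0 ] && echo 'all NBD devices stopped'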
00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:59.343 20:56:26 event.app_repeat -- event/event.sh@39 -- # killprocess 1740187 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1740187 ']' 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1740187 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1740187 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1740187' 00:05:59.343 killing process with pid 1740187 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1740187 00:05:59.343 20:56:26 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1740187 00:05:59.343 spdk_app_start is called in Round 0. 00:05:59.343 Shutdown signal received, stop current app iteration 00:05:59.343 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:59.343 spdk_app_start is called in Round 1. 00:05:59.343 Shutdown signal received, stop current app iteration 00:05:59.343 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:59.343 spdk_app_start is called in Round 2. 00:05:59.343 Shutdown signal received, stop current app iteration 00:05:59.343 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:59.343 spdk_app_start is called in Round 3. 
00:05:59.343 Shutdown signal received, stop current app iteration 00:05:59.343 20:56:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:59.343 20:56:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:59.343 00:05:59.343 real 0m15.530s 00:05:59.343 user 0m33.497s 00:05:59.343 sys 0m2.123s 00:05:59.344 20:56:26 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.344 20:56:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.344 ************************************ 00:05:59.344 END TEST app_repeat 00:05:59.344 ************************************ 00:05:59.344 20:56:26 event -- common/autotest_common.sh@1142 -- # return 0 00:05:59.344 20:56:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:59.344 20:56:26 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:59.344 20:56:26 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.344 20:56:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.344 20:56:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.344 ************************************ 00:05:59.344 START TEST cpu_locks 00:05:59.344 ************************************ 00:05:59.344 20:56:26 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:59.344 * Looking for test storage... 00:05:59.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:59.344 20:56:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:59.344 20:56:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:59.344 20:56:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:59.344 20:56:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:59.344 20:56:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.344 20:56:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.344 20:56:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.344 ************************************ 00:05:59.344 START TEST default_locks 00:05:59.344 ************************************ 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1743452 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1743452 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1743452 ']' 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
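The default_locks trace that follows this point starts a target on core mask 0x1 and verifies, via lslocks, that the process holds a spdk_cpu_lock file lock while it runs and that nothing remains once it is killed (the stray "lslocks: write error" lines are most likely harmless broken-pipe noise from grep -q closing the pipe early). A rough standalone equivalent, assuming spdk_tgt is on PATH and using a plain sleep in place of the harness's waitforlisten/killprocess helpers:

    # Rough standalone version of the lock check in cpu_locks.sh's default_locks.
    spdk_tgt -m 0x1 &                        # single-core target, as in the trace
    tgt=$!
    sleep 2                                  # stand-in for waitforlisten on /var/tmp/spdk.sock
    lslocks -p "$tgt" | grep -q spdk_cpu_lock && echo "pid $tgt holds its per-core lock"
    kill "$tgt"; wait "$tgt" 2>/dev/null
    lslocks -p "$tgt" | grep -q spdk_cpu_lock || echo "lock released"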
00:05:59.344 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.344 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.344 [2024-07-15 20:56:26.624019] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:59.344 [2024-07-15 20:56:26.624087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743452 ] 00:05:59.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.605 [2024-07-15 20:56:26.695416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.605 [2024-07-15 20:56:26.769515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.176 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.176 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:00.176 20:56:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1743452 00:06:00.176 20:56:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1743452 00:06:00.176 20:56:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.747 lslocks: write error 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1743452 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1743452 ']' 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1743452 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743452 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743452' 00:06:00.747 killing process with pid 1743452 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1743452 00:06:00.747 20:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1743452 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1743452 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1743452 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1743452 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1743452 ']' 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1743452) - No such process 00:06:01.008 ERROR: process (pid: 1743452) is no longer running 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.008 00:06:01.008 real 0m1.523s 00:06:01.008 user 0m1.582s 00:06:01.008 sys 0m0.527s 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.008 20:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.008 ************************************ 00:06:01.008 END TEST default_locks 00:06:01.008 ************************************ 00:06:01.008 20:56:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.008 20:56:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.008 20:56:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.008 20:56:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.008 20:56:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.008 ************************************ 00:06:01.008 START TEST default_locks_via_rpc 00:06:01.008 ************************************ 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1743807 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1743807 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1743807 ']' 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.009 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.009 [2024-07-15 20:56:28.189757] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:01.009 [2024-07-15 20:56:28.189795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743807 ] 00:06:01.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.009 [2024-07-15 20:56:28.246654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.269 [2024-07-15 20:56:28.310684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.840 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.840 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.840 20:56:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1743807 00:06:01.840 20:56:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1743807 00:06:01.840 20:56:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
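default_locks_via_rpc, whose trace continues here, drives the same lock through the RPC interface instead of process lifetime: with the target still running, framework_disable_cpumask_locks makes the per-core lock files disappear (the no_locks helper sees an empty /var/tmp/spdk_cpu_lock_* glob), and framework_enable_cpumask_locks re-takes the lock before the lslocks check above. Outside the harness the round trip looks roughly like this (rpc.py path relative to an SPDK checkout, the pidof lookup is purely illustrative):

    # Toggling CPU core locks over RPC, as default_locks_via_rpc does above.
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC framework_disable_cpumask_locks                   # lock files under /var/tmp/spdk_cpu_lock_* go away
    $RPC framework_enable_cpumask_locks                    # locks are taken again
    lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock    # should show the core-0 lock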
00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1743807 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1743807 ']' 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1743807 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743807 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743807' 00:06:02.101 killing process with pid 1743807 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1743807 00:06:02.101 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1743807 00:06:02.361 00:06:02.362 real 0m1.305s 00:06:02.362 user 0m1.422s 00:06:02.362 sys 0m0.392s 00:06:02.362 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.362 20:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.362 ************************************ 00:06:02.362 END TEST default_locks_via_rpc 00:06:02.362 ************************************ 00:06:02.362 20:56:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:02.362 20:56:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.362 20:56:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.362 20:56:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.362 20:56:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.362 ************************************ 00:06:02.362 START TEST non_locking_app_on_locked_coremask 00:06:02.362 ************************************ 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1744168 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1744168 /var/tmp/spdk.sock 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1744168 ']' 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.362 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.362 [2024-07-15 20:56:29.593714] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:02.362 [2024-07-15 20:56:29.593764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744168 ] 00:06:02.362 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.622 [2024-07-15 20:56:29.661812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.622 [2024-07-15 20:56:29.733616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1744379 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1744379 /var/tmp/spdk2.sock 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1744379 ']' 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.243 20:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.243 [2024-07-15 20:56:30.387623] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:03.243 [2024-07-15 20:56:30.387675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744379 ] 00:06:03.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.243 [2024-07-15 20:56:30.484748] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.243 [2024-07-15 20:56:30.484779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.515 [2024-07-15 20:56:30.613803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.086 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.086 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:04.086 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1744168 00:06:04.086 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1744168 00:06:04.086 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.347 lslocks: write error 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1744168 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1744168 ']' 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1744168 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744168 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744168' 00:06:04.347 killing process with pid 1744168 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1744168 00:06:04.347 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1744168 00:06:04.607 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1744379 00:06:04.607 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1744379 ']' 00:06:04.607 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1744379 00:06:04.607 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.607 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.607 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744379 00:06:04.868 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.868 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.868 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744379' 00:06:04.868 
killing process with pid 1744379 00:06:04.868 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1744379 00:06:04.868 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1744379 00:06:04.868 00:06:04.868 real 0m2.600s 00:06:04.868 user 0m2.822s 00:06:04.868 sys 0m0.755s 00:06:04.868 20:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.868 20:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.868 ************************************ 00:06:04.868 END TEST non_locking_app_on_locked_coremask 00:06:04.868 ************************************ 00:06:05.129 20:56:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:05.129 20:56:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.129 20:56:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.129 20:56:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.129 20:56:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.129 ************************************ 00:06:05.129 START TEST locking_app_on_unlocked_coremask 00:06:05.129 ************************************ 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1744851 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1744851 /var/tmp/spdk.sock 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1744851 ']' 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.129 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.129 [2024-07-15 20:56:32.265056] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:05.129 [2024-07-15 20:56:32.265111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744851 ] 00:06:05.129 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.129 [2024-07-15 20:56:32.333825] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:05.129 [2024-07-15 20:56:32.333858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.129 [2024-07-15 20:56:32.407840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1744883 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1744883 /var/tmp/spdk2.sock 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1744883 ']' 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.068 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.068 [2024-07-15 20:56:33.078138] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:06.068 [2024-07-15 20:56:33.078193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744883 ] 00:06:06.068 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.068 [2024-07-15 20:56:33.177149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.068 [2024-07-15 20:56:33.306620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.636 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.636 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:06.636 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1744883 00:06:06.636 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1744883 00:06:06.636 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.577 lslocks: write error 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1744851 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1744851 ']' 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1744851 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744851 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744851' 00:06:07.577 killing process with pid 1744851 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1744851 00:06:07.577 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1744851 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1744883 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1744883 ']' 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1744883 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744883 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744883' 00:06:07.838 killing process with pid 1744883 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1744883 00:06:07.838 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1744883 00:06:08.098 00:06:08.098 real 0m3.083s 00:06:08.098 user 0m3.353s 00:06:08.098 sys 0m0.915s 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.098 ************************************ 00:06:08.098 END TEST locking_app_on_unlocked_coremask 00:06:08.098 ************************************ 00:06:08.098 20:56:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.098 20:56:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.098 20:56:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.098 20:56:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.098 20:56:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.098 ************************************ 00:06:08.098 START TEST locking_app_on_locked_coremask 00:06:08.098 ************************************ 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1745440 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1745440 /var/tmp/spdk.sock 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1745440 ']' 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.098 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.358 [2024-07-15 20:56:35.422570] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
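The two tests that end above, non_locking_app_on_locked_coremask and locking_app_on_unlocked_coremask, are mirror images of one another: two spdk_tgt instances share core mask 0x1 and can coexist only because exactly one of them was started with --disable-cpumask-locks, each on its own RPC socket. Schematically (both command lines are taken from the traces above; PIDs and waits omitted):

    # Two targets on the same core mask coexist when one side opts out of locking.
    spdk_tgt -m 0x1 &                                                  # takes the core-0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips the lock, so it starts fine
    # locking_app_on_unlocked_coremask runs the same pair with the roles swapped:
    # the first instance gets --disable-cpumask-locks, the second locks as usual.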
00:06:08.358 [2024-07-15 20:56:35.422620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745440 ] 00:06:08.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.358 [2024-07-15 20:56:35.487538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.358 [2024-07-15 20:56:35.553070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1745600 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1745600 /var/tmp/spdk2.sock 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1745600 /var/tmp/spdk2.sock 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1745600 /var/tmp/spdk2.sock 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1745600 ']' 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.927 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.928 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.928 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.928 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.188 [2024-07-15 20:56:36.230095] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:09.188 [2024-07-15 20:56:36.230147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745600 ] 00:06:09.188 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.188 [2024-07-15 20:56:36.329387] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1745440 has claimed it. 00:06:09.188 [2024-07-15 20:56:36.329426] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1745600) - No such process 00:06:09.759 ERROR: process (pid: 1745600) is no longer running 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1745440 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1745440 00:06:09.759 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.020 lslocks: write error 00:06:10.020 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1745440 00:06:10.020 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1745440 ']' 00:06:10.020 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1745440 00:06:10.020 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.020 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.020 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745440 00:06:10.282 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.282 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.282 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745440' 00:06:10.282 killing process with pid 1745440 00:06:10.282 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1745440 00:06:10.282 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1745440 00:06:10.282 00:06:10.282 real 0m2.162s 00:06:10.282 user 0m2.372s 00:06:10.282 sys 0m0.602s 00:06:10.282 20:56:37 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.282 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.282 ************************************ 00:06:10.282 END TEST locking_app_on_locked_coremask 00:06:10.282 ************************************ 00:06:10.282 20:56:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.282 20:56:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.282 20:56:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.282 20:56:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.282 20:56:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.543 ************************************ 00:06:10.543 START TEST locking_overlapped_coremask 00:06:10.543 ************************************ 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1745958 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1745958 /var/tmp/spdk.sock 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1745958 ']' 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.543 20:56:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.543 [2024-07-15 20:56:37.653739] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
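locking_app_on_locked_coremask, which finishes above, is the negative case: the second instance keeps locking enabled while reusing the already-claimed mask, so claim_cpu_cores refuses it ("Cannot create lock on core 0, probably process 1745440 has claimed it"), spdk_app_start bails out, and the harness confirms the process is gone afterwards. Reduced to its essentials (same binary and socket naming as in the trace):

    # A second instance with locking left on must fail on an already-claimed core.
    spdk_tgt -m 0x1 &                        # first instance claims core 0
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # second instance hits claim_cpu_cores and exits
    # the harness then checks the second PID is gone (kill reports "No such process")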
00:06:10.543 [2024-07-15 20:56:37.653792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745958 ] 00:06:10.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.543 [2024-07-15 20:56:37.720534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.543 [2024-07-15 20:56:37.786500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.543 [2024-07-15 20:56:37.786614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.543 [2024-07-15 20:56:37.786617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1745997 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1745997 /var/tmp/spdk2.sock 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1745997 /var/tmp/spdk2.sock 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1745997 /var/tmp/spdk2.sock 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1745997 ']' 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.495 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.495 [2024-07-15 20:56:38.467915] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:11.495 [2024-07-15 20:56:38.467966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745997 ] 00:06:11.495 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.495 [2024-07-15 20:56:38.549687] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1745958 has claimed it. 00:06:11.495 [2024-07-15 20:56:38.549722] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1745997) - No such process 00:06:12.066 ERROR: process (pid: 1745997) is no longer running 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1745958 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1745958 ']' 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1745958 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745958 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745958' 00:06:12.066 killing process with pid 1745958 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1745958 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1745958 00:06:12.066 00:06:12.066 real 0m1.749s 00:06:12.066 user 0m4.955s 00:06:12.066 sys 0m0.353s 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.066 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.066 ************************************ 00:06:12.066 END TEST locking_overlapped_coremask 00:06:12.066 ************************************ 00:06:12.328 20:56:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.328 20:56:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.328 20:56:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.328 20:56:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.328 20:56:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.328 ************************************ 00:06:12.328 START TEST locking_overlapped_coremask_via_rpc 00:06:12.328 ************************************ 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1746334 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1746334 /var/tmp/spdk.sock 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1746334 ']' 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.328 20:56:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.328 [2024-07-15 20:56:39.479308] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:12.328 [2024-07-15 20:56:39.479357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746334 ] 00:06:12.328 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.328 [2024-07-15 20:56:39.545917] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.328 [2024-07-15 20:56:39.545945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.328 [2024-07-15 20:56:39.611285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.328 [2024-07-15 20:56:39.611558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.328 [2024-07-15 20:56:39.611561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1746442 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1746442 /var/tmp/spdk2.sock 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1746442 ']' 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.270 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.270 [2024-07-15 20:56:40.303806] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:13.270 [2024-07-15 20:56:40.303862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746442 ] 00:06:13.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.270 [2024-07-15 20:56:40.385544] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
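The two targets are started with deliberately overlapping masks: 0x7 covers cores 0, 1 and 2, while 0x1c covers cores 2, 3 and 4, so core 2 is claimed by both. A small sketch of that overlap calculation, which is why the lock error below lands on core 2:

# Sketch: compute which core the two masks above share
mask_a=0x7     # first spdk_tgt (cores 0,1,2)
mask_b=0x1c    # second spdk_tgt (cores 2,3,4)
overlap=$(( mask_a & mask_b ))
printf 'overlap mask: 0x%x\n' "$overlap"                 # 0x4
for i in {0..31}; do
    (( (overlap >> i) & 1 )) && echo "shared core: $i"   # shared core: 2
done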
00:06:13.270 [2024-07-15 20:56:40.385566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.270 [2024-07-15 20:56:40.490879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.270 [2024-07-15 20:56:40.494350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.270 [2024-07-15 20:56:40.494353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.843 [2024-07-15 20:56:41.084296] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1746334 has claimed it. 
00:06:13.843 request: 00:06:13.843 { 00:06:13.843 "method": "framework_enable_cpumask_locks", 00:06:13.843 "req_id": 1 00:06:13.843 } 00:06:13.843 Got JSON-RPC error response 00:06:13.843 response: 00:06:13.843 { 00:06:13.843 "code": -32603, 00:06:13.843 "message": "Failed to claim CPU core: 2" 00:06:13.843 } 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1746334 /var/tmp/spdk.sock 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1746334 ']' 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.843 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.104 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.104 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.105 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1746442 /var/tmp/spdk2.sock 00:06:14.105 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1746442 ']' 00:06:14.105 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.105 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.105 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
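rpc_cmd in these tests is the autotest wrapper around SPDK's scripts/rpc.py, so the pair of calls above can be reproduced by hand. A sketch, assuming the same two RPC sockets and the checkout path used by this job:

# Sketch: re-enable cpumask locks on both targets via JSON-RPC
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# First target (mask 0x7): succeeds and claims locks on cores 0-2
"$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

# Second target (mask 0x1c): expected to fail with JSON-RPC error -32603
# ("Failed to claim CPU core: 2"), since core 2 is already locked above
"$SPDK"/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks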
00:06:14.105 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.105 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.365 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.365 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.365 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:14.365 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:14.365 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:14.365 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:14.365 00:06:14.365 real 0m2.003s 00:06:14.365 user 0m0.776s 00:06:14.365 sys 0m0.150s 00:06:14.365 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.365 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.365 ************************************ 00:06:14.365 END TEST locking_overlapped_coremask_via_rpc 00:06:14.365 ************************************ 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:14.366 20:56:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:14.366 20:56:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1746334 ]] 00:06:14.366 20:56:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1746334 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1746334 ']' 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1746334 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1746334 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1746334' 00:06:14.366 killing process with pid 1746334 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1746334 00:06:14.366 20:56:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1746334 00:06:14.627 20:56:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1746442 ]] 00:06:14.627 20:56:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1746442 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1746442 ']' 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1746442 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1746442 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1746442' 00:06:14.627 killing process with pid 1746442 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1746442 00:06:14.627 20:56:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1746442 00:06:14.888 20:56:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.888 20:56:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.888 20:56:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1746334 ]] 00:06:14.888 20:56:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1746334 00:06:14.888 20:56:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1746334 ']' 00:06:14.888 20:56:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1746334 00:06:14.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1746334) - No such process 00:06:14.888 20:56:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1746334 is not found' 00:06:14.888 Process with pid 1746334 is not found 00:06:14.888 20:56:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1746442 ]] 00:06:14.888 20:56:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1746442 00:06:14.888 20:56:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1746442 ']' 00:06:14.888 20:56:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1746442 00:06:14.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1746442) - No such process 00:06:14.888 20:56:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1746442 is not found' 00:06:14.888 Process with pid 1746442 is not found 00:06:14.888 20:56:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.888 00:06:14.888 real 0m15.565s 00:06:14.888 user 0m26.830s 00:06:14.888 sys 0m4.559s 00:06:14.888 20:56:41 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.888 20:56:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.888 ************************************ 00:06:14.888 END TEST cpu_locks 00:06:14.888 ************************************ 00:06:14.888 20:56:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:14.888 00:06:14.888 real 0m41.022s 00:06:14.888 user 1m19.654s 00:06:14.888 sys 0m7.695s 00:06:14.888 20:56:42 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.888 20:56:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.888 ************************************ 00:06:14.888 END TEST event 00:06:14.888 ************************************ 00:06:14.888 20:56:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.888 20:56:42 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.888 20:56:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.888 20:56:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.888 
20:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.888 ************************************ 00:06:14.888 START TEST thread 00:06:14.888 ************************************ 00:06:14.888 20:56:42 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:15.149 * Looking for test storage... 00:06:15.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:15.149 20:56:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.149 20:56:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:15.149 20:56:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.149 20:56:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.149 ************************************ 00:06:15.149 START TEST thread_poller_perf 00:06:15.149 ************************************ 00:06:15.149 20:56:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.149 [2024-07-15 20:56:42.258766] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:15.149 [2024-07-15 20:56:42.258878] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747014 ] 00:06:15.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.149 [2024-07-15 20:56:42.331699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.149 [2024-07-15 20:56:42.406212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.149 Running 1000 pollers for 1 seconds with 1 microseconds period. 
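poller_perf registers -b 1000 pollers with a -l 1 microsecond period and runs them for -t 1 second; the poller_cost values in the result blocks that follow are consistent with the busy cycle count divided by total_run_count, converted to nanoseconds through tsc_hz. A sketch of that arithmetic using the first run's numbers (the 0-microsecond run further below works out the same way):

# Sketch: reproduce the first run's poller_cost from the reported counters
busy=2407268822          # busy: ... (cyc)
runs=287000              # total_run_count
tsc_hz=2400000000        # tsc_hz: ... (cyc)
cost_cyc=$(( busy / runs ))                        # 8387 (cyc)
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))    # 3494 (nsec)
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"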
00:06:16.534 ====================================== 00:06:16.534 busy:2407268822 (cyc) 00:06:16.534 total_run_count: 287000 00:06:16.534 tsc_hz: 2400000000 (cyc) 00:06:16.534 ====================================== 00:06:16.534 poller_cost: 8387 (cyc), 3494 (nsec) 00:06:16.534 00:06:16.534 real 0m1.229s 00:06:16.534 user 0m1.142s 00:06:16.534 sys 0m0.082s 00:06:16.534 20:56:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.534 20:56:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.535 ************************************ 00:06:16.535 END TEST thread_poller_perf 00:06:16.535 ************************************ 00:06:16.535 20:56:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:16.535 20:56:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.535 20:56:43 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:16.535 20:56:43 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.535 20:56:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.535 ************************************ 00:06:16.535 START TEST thread_poller_perf 00:06:16.535 ************************************ 00:06:16.535 20:56:43 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.535 [2024-07-15 20:56:43.548635] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:16.535 [2024-07-15 20:56:43.548734] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747155 ] 00:06:16.535 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.535 [2024-07-15 20:56:43.619637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.535 [2024-07-15 20:56:43.686366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.535 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:17.477 ====================================== 00:06:17.477 busy:2401901646 (cyc) 00:06:17.477 total_run_count: 3764000 00:06:17.477 tsc_hz: 2400000000 (cyc) 00:06:17.477 ====================================== 00:06:17.477 poller_cost: 638 (cyc), 265 (nsec) 00:06:17.477 00:06:17.477 real 0m1.213s 00:06:17.477 user 0m1.134s 00:06:17.477 sys 0m0.075s 00:06:17.477 20:56:44 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.477 20:56:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.477 ************************************ 00:06:17.477 END TEST thread_poller_perf 00:06:17.477 ************************************ 00:06:17.739 20:56:44 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:17.739 20:56:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.739 00:06:17.739 real 0m2.672s 00:06:17.739 user 0m2.366s 00:06:17.739 sys 0m0.313s 00:06:17.739 20:56:44 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.739 20:56:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.739 ************************************ 00:06:17.739 END TEST thread 00:06:17.739 ************************************ 00:06:17.739 20:56:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.739 20:56:44 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:17.739 20:56:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.739 20:56:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.739 20:56:44 -- common/autotest_common.sh@10 -- # set +x 00:06:17.739 ************************************ 00:06:17.739 START TEST accel 00:06:17.739 ************************************ 00:06:17.739 20:56:44 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:17.739 * Looking for test storage... 00:06:17.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:17.739 20:56:44 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:17.739 20:56:44 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:17.739 20:56:44 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.739 20:56:44 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1747530 00:06:17.739 20:56:44 accel -- accel/accel.sh@63 -- # waitforlisten 1747530 00:06:17.739 20:56:44 accel -- common/autotest_common.sh@829 -- # '[' -z 1747530 ']' 00:06:17.739 20:56:44 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.739 20:56:44 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.739 20:56:44 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:17.739 20:56:44 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:17.739 20:56:44 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.739 20:56:44 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:17.739 20:56:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.739 20:56:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.739 20:56:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.739 20:56:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.739 20:56:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.739 20:56:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.739 20:56:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:17.739 20:56:44 accel -- accel/accel.sh@41 -- # jq -r . 00:06:17.739 [2024-07-15 20:56:45.004514] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:17.739 [2024-07-15 20:56:45.004562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747530 ] 00:06:18.000 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.000 [2024-07-15 20:56:45.073183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.000 [2024-07-15 20:56:45.137466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@862 -- # return 0 00:06:18.573 20:56:45 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:18.573 20:56:45 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:18.573 20:56:45 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:18.573 20:56:45 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:18.573 20:56:45 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:18.573 20:56:45 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:18.573 20:56:45 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 
20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # IFS== 00:06:18.573 20:56:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:18.573 20:56:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.573 20:56:45 accel -- accel/accel.sh@75 -- # killprocess 1747530 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@948 -- # '[' -z 1747530 ']' 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@952 -- # kill -0 1747530 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@953 -- # uname 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.573 20:56:45 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1747530 00:06:18.835 20:56:45 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.835 20:56:45 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.835 20:56:45 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1747530' 00:06:18.835 killing process with pid 1747530 00:06:18.835 20:56:45 accel -- common/autotest_common.sh@967 -- # kill 1747530 00:06:18.835 20:56:45 accel -- common/autotest_common.sh@972 -- # wait 1747530 00:06:18.835 20:56:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:18.835 20:56:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:18.835 20:56:46 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:18.835 20:56:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.835 20:56:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.835 20:56:46 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:18.835 20:56:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
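The expected_opcs loop above asks the running target which module is assigned to each accel opcode and records the answer (software for every opcode in this configuration). The same query can be made directly; a sketch using the job's checkout path and the default RPC socket:

# Sketch: list accel opcode -> module assignments, as accel.sh does
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK"/scripts/rpc.py accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
  | while IFS== read -r opc module; do
        echo "opcode ${opc} -> ${module}"    # every opcode reports 'software' in this run
    done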
00:06:19.096 20:56:46 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.096 20:56:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:19.096 20:56:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.096 20:56:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:19.096 20:56:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.096 20:56:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.096 20:56:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.096 ************************************ 00:06:19.096 START TEST accel_missing_filename 00:06:19.096 ************************************ 00:06:19.096 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:19.096 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:19.096 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:19.096 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.096 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.096 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.096 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.096 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:19.096 20:56:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:19.096 [2024-07-15 20:56:46.246282] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:19.096 [2024-07-15 20:56:46.246367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747896 ] 00:06:19.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.096 [2024-07-15 20:56:46.314727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.096 [2024-07-15 20:56:46.380237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.357 [2024-07-15 20:56:46.412011] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.357 [2024-07-15 20:56:46.448992] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:19.357 A filename is required. 
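The failure above is the point of the test: the compress workload has no data to operate on unless -l names an input file, and (as the next test shows) it also rejects -y verification. A hedged sketch of the three variants; note the real harness additionally feeds accel_perf a JSON accel config on -c, which is omitted here:

# Sketch: compress workload argument handling exercised by these tests
ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib

"$ACCEL_PERF" -t 1 -w compress                 # fails: "A filename is required."
"$ACCEL_PERF" -t 1 -w compress -l "$BIB" -y    # fails: verify is not supported for compress
"$ACCEL_PERF" -t 1 -w compress -l "$BIB"       # the form the option help describes for compress input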
00:06:19.357 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:19.357 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.358 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:19.358 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:19.358 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:19.358 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.358 00:06:19.358 real 0m0.286s 00:06:19.358 user 0m0.214s 00:06:19.358 sys 0m0.112s 00:06:19.358 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.358 20:56:46 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 ************************************ 00:06:19.358 END TEST accel_missing_filename 00:06:19.358 ************************************ 00:06:19.358 20:56:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.358 20:56:46 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.358 20:56:46 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:19.358 20:56:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.358 20:56:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 ************************************ 00:06:19.358 START TEST accel_compress_verify 00:06:19.358 ************************************ 00:06:19.358 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.358 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:19.358 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.358 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.358 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.358 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.358 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.358 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.358 20:56:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.358 20:56:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:19.358 20:56:46 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.358 20:56:46 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.358 20:56:46 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.358 20:56:46 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.358 20:56:46 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.358 20:56:46 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:19.358 20:56:46 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:19.358 [2024-07-15 20:56:46.608542] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:19.358 [2024-07-15 20:56:46.608644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747923 ] 00:06:19.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.619 [2024-07-15 20:56:46.677710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.619 [2024-07-15 20:56:46.743596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.619 [2024-07-15 20:56:46.775393] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.619 [2024-07-15 20:56:46.812609] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:19.619 00:06:19.619 Compression does not support the verify option, aborting. 00:06:19.619 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:19.619 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.619 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:19.619 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:19.619 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:19.619 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.619 00:06:19.619 real 0m0.289s 00:06:19.619 user 0m0.220s 00:06:19.619 sys 0m0.112s 00:06:19.619 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.619 20:56:46 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:19.619 ************************************ 00:06:19.619 END TEST accel_compress_verify 00:06:19.619 ************************************ 00:06:19.619 20:56:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.619 20:56:46 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:19.619 20:56:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.619 20:56:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.619 20:56:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.882 ************************************ 00:06:19.882 START TEST accel_wrong_workload 00:06:19.882 ************************************ 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.882 20:56:46 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:19.882 20:56:46 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:19.882 Unsupported workload type: foobar 00:06:19.882 [2024-07-15 20:56:46.972160] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:19.882 accel_perf options: 00:06:19.882 [-h help message] 00:06:19.882 [-q queue depth per core] 00:06:19.882 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:19.882 [-T number of threads per core 00:06:19.882 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:19.882 [-t time in seconds] 00:06:19.882 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:19.882 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:19.882 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:19.882 [-l for compress/decompress workloads, name of uncompressed input file 00:06:19.882 [-S for crc32c workload, use this seed value (default 0) 00:06:19.882 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:19.882 [-f for fill workload, use this BYTE value (default 255) 00:06:19.882 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:19.882 [-y verify result if this switch is on] 00:06:19.882 [-a tasks to allocate per core (default: same value as -q)] 00:06:19.882 Can be used to spread operations across a wider range of memory. 
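As the usage text above lists, -w must name one of the supported workload types; foobar is rejected during argument parsing before any work is queued. For contrast, a sketch of an accepted form, the same crc32c invocation the accel_crc32c test below uses (again without the harness-supplied -c config):

# Sketch: rejected vs. accepted -w values for accel_perf
ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf

"$ACCEL_PERF" -t 1 -w foobar              # rejected: "Unsupported workload type: foobar"
"$ACCEL_PERF" -t 1 -w crc32c -S 32 -y     # accepted: crc32c with seed 32, verify enabled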
00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.882 00:06:19.882 real 0m0.037s 00:06:19.882 user 0m0.023s 00:06:19.882 sys 0m0.014s 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.882 20:56:46 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:19.882 ************************************ 00:06:19.882 END TEST accel_wrong_workload 00:06:19.882 ************************************ 00:06:19.882 Error: writing output failed: Broken pipe 00:06:19.882 20:56:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.882 20:56:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.882 20:56:47 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:19.882 20:56:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.882 20:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.882 ************************************ 00:06:19.882 START TEST accel_negative_buffers 00:06:19.882 ************************************ 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:19.882 20:56:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:19.882 -x option must be non-negative. 
00:06:19.882 [2024-07-15 20:56:47.082423] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:19.882 accel_perf options: 00:06:19.882 [-h help message] 00:06:19.882 [-q queue depth per core] 00:06:19.882 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:19.882 [-T number of threads per core 00:06:19.882 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:19.882 [-t time in seconds] 00:06:19.882 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:19.882 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:19.882 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:19.882 [-l for compress/decompress workloads, name of uncompressed input file 00:06:19.882 [-S for crc32c workload, use this seed value (default 0) 00:06:19.882 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:19.882 [-f for fill workload, use this BYTE value (default 255) 00:06:19.882 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:19.882 [-y verify result if this switch is on] 00:06:19.882 [-a tasks to allocate per core (default: same value as -q)] 00:06:19.882 Can be used to spread operations across a wider range of memory. 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.882 00:06:19.882 real 0m0.037s 00:06:19.882 user 0m0.020s 00:06:19.882 sys 0m0.017s 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.882 20:56:47 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:19.882 ************************************ 00:06:19.882 END TEST accel_negative_buffers 00:06:19.882 ************************************ 00:06:19.882 Error: writing output failed: Broken pipe 00:06:19.882 20:56:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.882 20:56:47 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:19.882 20:56:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:19.882 20:56:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.882 20:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.882 ************************************ 00:06:19.882 START TEST accel_crc32c 00:06:19.882 ************************************ 00:06:19.882 20:56:47 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:19.882 20:56:47 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:19.882 20:56:47 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:19.883 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.883 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.883 20:56:47 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:19.883 20:56:47 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:19.883 20:56:47 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:20.145 [2024-07-15 20:56:47.192793] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:20.145 [2024-07-15 20:56:47.192866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748120 ] 00:06:20.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.145 [2024-07-15 20:56:47.263996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.145 [2024-07-15 20:56:47.337274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 20:56:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.532 20:56:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.532 00:06:21.532 real 0m1.301s 00:06:21.532 user 0m1.200s 00:06:21.532 sys 0m0.112s 00:06:21.532 20:56:48 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.532 20:56:48 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:21.532 ************************************ 00:06:21.532 END TEST accel_crc32c 00:06:21.532 ************************************ 00:06:21.532 20:56:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.532 20:56:48 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:21.533 20:56:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.533 20:56:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.533 20:56:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.533 ************************************ 00:06:21.533 START TEST accel_crc32c_C2 00:06:21.533 ************************************ 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:21.533 20:56:48 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:21.533 [2024-07-15 20:56:48.569923] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:21.533 [2024-07-15 20:56:48.569988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748345 ] 00:06:21.533 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.533 [2024-07-15 20:56:48.651740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.533 [2024-07-15 20:56:48.725119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:21.533 20:56:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.921 00:06:22.921 real 0m1.313s 00:06:22.921 user 0m1.207s 00:06:22.921 sys 0m0.116s 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.921 20:56:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:22.921 ************************************ 00:06:22.921 END TEST accel_crc32c_C2 00:06:22.921 ************************************ 00:06:22.921 20:56:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.921 20:56:49 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:22.921 20:56:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:22.921 20:56:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.921 20:56:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.921 ************************************ 00:06:22.921 START TEST accel_copy 00:06:22.921 ************************************ 00:06:22.921 20:56:49 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:22.921 20:56:49 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
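The chained-crc32c case that closes above (END TEST accel_crc32c_C2) was launched as build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2, with the JSON on /dev/fd/62 coming from the harness's build_accel_config/jq pipeline and accel_json_cfg left empty. A minimal sketch of re-running that case by hand, assuming the workspace path shown in this log and treating the config JSON as an assumed reconstruction of what the empty accel_json_cfg array produces:

    # Hypothetical manual re-run of the accel_crc32c_C2 case traced above.
    # Flags are copied from the traced accel_perf command line; the config JSON
    # is an assumption standing in for the harness's build_accel_config output.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cfg='{"subsystems":[{"subsystem":"accel","config":[]}]}'
    "$SPDK/build/examples/accel_perf" -c <(echo "$cfg") -t 1 -w crc32c -y -C 2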
00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:22.922 20:56:49 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:22.922 [2024-07-15 20:56:49.958971] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:22.922 [2024-07-15 20:56:49.959031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748694 ] 00:06:22.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.922 [2024-07-15 20:56:50.028014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.922 [2024-07-15 20:56:50.098122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.922 20:56:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 
20:56:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:24.307 20:56:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.307 00:06:24.307 real 0m1.299s 00:06:24.307 user 0m1.193s 00:06:24.307 sys 0m0.116s 00:06:24.307 20:56:51 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.307 20:56:51 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 ************************************ 00:06:24.307 END TEST accel_copy 00:06:24.307 ************************************ 00:06:24.307 20:56:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.307 20:56:51 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.307 20:56:51 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:24.307 20:56:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.307 20:56:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 ************************************ 00:06:24.307 START TEST accel_fill 00:06:24.307 ************************************ 00:06:24.307 20:56:51 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:24.307 [2024-07-15 20:56:51.337293] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:24.307 [2024-07-15 20:56:51.337413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749044 ] 00:06:24.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.307 [2024-07-15 20:56:51.409734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.307 [2024-07-15 20:56:51.476521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.307 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
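Each of the cases that has completed so far (accel_crc32c, accel_crc32c_C2, accel_copy) ends the same way in the trace: accel_module and accel_opc are checked to be non-empty, the module is compared against software, and the real/user/sys times are printed before the END TEST banner. A minimal sketch of that closing assertion, using the accel_module/accel_opc names whose expanded values (software, crc32c, copy, ...) appear in the trace:

    # Sketch of the pass check each case ends with. accel_module and accel_opc
    # are the harness variables set earlier in each trace (accel_module=software,
    # accel_opc=<workload>); only the software module is expected in these runs.
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]] \
        && echo "accel_perf ran $accel_opc on the software module"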
00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.308 20:56:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.693 20:56:52 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.693 20:56:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.694 20:56:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:25.694 20:56:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.694 00:06:25.694 real 0m1.302s 00:06:25.694 user 0m1.201s 00:06:25.694 sys 0m0.113s 00:06:25.694 20:56:52 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.694 20:56:52 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:25.694 ************************************ 00:06:25.694 END TEST accel_fill 00:06:25.694 ************************************ 00:06:25.694 20:56:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.694 20:56:52 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:25.694 20:56:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:25.694 20:56:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.694 20:56:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.694 ************************************ 00:06:25.694 START TEST accel_copy_crc32c 00:06:25.694 ************************************ 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:25.694 [2024-07-15 20:56:52.710180] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:25.694 [2024-07-15 20:56:52.710282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749397 ] 00:06:25.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.694 [2024-07-15 20:56:52.778679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.694 [2024-07-15 20:56:52.844158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.694 
20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.694 20:56:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.075 00:06:27.075 real 0m1.293s 00:06:27.075 user 0m1.206s 00:06:27.075 sys 0m0.100s 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.075 20:56:53 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:27.075 ************************************ 00:06:27.075 END TEST accel_copy_crc32c 00:06:27.075 ************************************ 00:06:27.075 20:56:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.075 20:56:54 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.075 20:56:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:27.075 20:56:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.075 20:56:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.075 ************************************ 00:06:27.075 START TEST accel_copy_crc32c_C2 00:06:27.075 ************************************ 00:06:27.075 20:56:54 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:27.075 [2024-07-15 20:56:54.076585] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:27.075 [2024-07-15 20:56:54.076679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749620 ] 00:06:27.075 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.075 [2024-07-15 20:56:54.144331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.075 [2024-07-15 20:56:54.211636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.075 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.076 20:56:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
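Taken together, this stretch of the log exercises accel_perf with one workload per test: crc32c (-S 32), chained crc32c (-C 2), copy, fill (-f 128 -q 64 -a 64), copy_crc32c, chained copy_crc32c (-C 2), and the dualcast case that starts just below. A hypothetical convenience loop over those same invocations, with the flags copied from the traced command lines and the config JSON assumed as in the earlier sketch:

    # Hypothetical loop over the accel_perf cases traced in this section.
    # $args is intentionally left unquoted so each flag string word-splits.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cfg='{"subsystems":[{"subsystem":"accel","config":[]}]}'
    for args in \
        "-w crc32c -S 32 -y" \
        "-w crc32c -y -C 2" \
        "-w copy -y" \
        "-w fill -f 128 -q 64 -a 64 -y" \
        "-w copy_crc32c -y" \
        "-w copy_crc32c -y -C 2" \
        "-w dualcast -y"; do
      "$SPDK/build/examples/accel_perf" -c <(echo "$cfg") -t 1 $args
    done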
00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.460 00:06:28.460 real 0m1.294s 00:06:28.460 user 0m1.200s 00:06:28.460 sys 0m0.107s 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.460 20:56:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:28.460 ************************************ 00:06:28.460 END TEST accel_copy_crc32c_C2 00:06:28.460 ************************************ 00:06:28.460 20:56:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.460 20:56:55 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:28.460 20:56:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:28.460 20:56:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.460 20:56:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.460 ************************************ 00:06:28.460 START TEST accel_dualcast 00:06:28.460 ************************************ 00:06:28.460 20:56:55 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:28.460 20:56:55 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:28.460 20:56:55 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:28.460 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.460 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.460 20:56:55 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:28.460 20:56:55 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:28.461 [2024-07-15 20:56:55.447457] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:28.461 [2024-07-15 20:56:55.447566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749815 ] 00:06:28.461 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.461 [2024-07-15 20:56:55.524770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.461 [2024-07-15 20:56:55.593364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.461 20:56:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:29.843 20:56:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.843 00:06:29.843 real 0m1.305s 00:06:29.843 user 0m1.204s 00:06:29.843 sys 0m0.113s 00:06:29.843 20:56:56 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.843 20:56:56 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:29.843 ************************************ 00:06:29.843 END TEST accel_dualcast 00:06:29.843 ************************************ 00:06:29.843 20:56:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.843 20:56:56 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:29.843 20:56:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.843 20:56:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.843 20:56:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.843 ************************************ 00:06:29.843 START TEST accel_compare 00:06:29.843 ************************************ 00:06:29.843 20:56:56 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:29.843 [2024-07-15 20:56:56.825665] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
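Each END TEST block above closes with three checks from accel.sh line 27, which xtrace shows in already-expanded form: "[[ -n software ]]", "[[ -n dualcast ]]" (or compare, xor, and so on) and "[[ software == \s\o\f\t\w\a\r\e ]]", where the backslash-escaping is simply how xtrace prints the literal pattern on the right-hand side of ==. Before expansion they plausibly read something like the sketch below; the variable names are assumed, not read from accel.sh.
# Hedged reconstruction of the end-of-test assertions (variable names assumed):
[[ -n "$accel_module" ]]               # some engine was reported by the run
[[ -n "$accel_opc" ]]                  # the requested opcode was reported
[[ "$accel_module" == "software" ]]    # and it ran on the software engine, as expected in these runs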
00:06:29.843 [2024-07-15 20:56:56.825729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750138 ] 00:06:29.843 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.843 [2024-07-15 20:56:56.893088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.843 [2024-07-15 20:56:56.959835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.843 20:56:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.224 
20:56:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:31.224 20:56:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.224 00:06:31.224 real 0m1.291s 00:06:31.224 user 0m1.196s 00:06:31.224 sys 0m0.106s 00:06:31.224 20:56:58 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.225 20:56:58 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:31.225 ************************************ 00:06:31.225 END TEST accel_compare 00:06:31.225 ************************************ 00:06:31.225 20:56:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.225 20:56:58 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:31.225 20:56:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:31.225 20:56:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.225 20:56:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.225 ************************************ 00:06:31.225 START TEST accel_xor 00:06:31.225 ************************************ 00:06:31.225 20:56:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:31.225 [2024-07-15 20:56:58.191847] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:31.225 [2024-07-15 20:56:58.191912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750485 ] 00:06:31.225 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.225 [2024-07-15 20:56:58.260358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.225 [2024-07-15 20:56:58.327893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.225 20:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.166 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.167 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:32.428 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.428 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.428 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.428 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.428 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.428 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.428 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.429 00:06:32.429 real 0m1.293s 00:06:32.429 user 0m1.195s 00:06:32.429 sys 0m0.110s 00:06:32.429 20:56:59 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.429 20:56:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:32.429 ************************************ 00:06:32.429 END TEST accel_xor 00:06:32.429 ************************************ 00:06:32.429 20:56:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.429 20:56:59 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:32.429 20:56:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:32.429 20:56:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.429 20:56:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.429 ************************************ 00:06:32.429 START TEST accel_xor 00:06:32.429 ************************************ 00:06:32.429 20:56:59 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:32.429 20:56:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:32.429 [2024-07-15 20:56:59.560093] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
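The accel_perf runs traced here can also be launched by hand against a built SPDK tree. The flags below are exactly the ones visible in this log: -t 1 matches the '1 seconds' run time read back in the trace, -w selects the workload, -y is read back as Yes and is presumably result verification, and -x 3 shows up in the trace as the value 3 for this xor case. The -c /dev/fd/62 argument only carries the harness's JSON accel config, which is empty in these runs (accel_json_cfg=() and the "[[ -n '' ]]" checks above), so a standalone run can omit it. A sketch, using this job's workspace path:
# Re-running the xor case by hand (path as used by this CI job; point APP at
# your own spdk/build/examples/accel_perf for a local tree, and have the usual
# SPDK hugepage setup in place, as reflected in the EAL messages above):
APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$APP" -t 1 -w xor -y -x 3     # the 1-second, verified xor run traced above
"$APP" -t 1 -w dif_verify      # the dif_verify case that follows below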
00:06:32.429 [2024-07-15 20:56:59.560180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750834 ] 00:06:32.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.429 [2024-07-15 20:56:59.652443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.690 [2024-07-15 20:56:59.725040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.690 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.691 20:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:33.632 20:57:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.632 00:06:33.632 real 0m1.324s 00:06:33.632 user 0m1.206s 00:06:33.632 sys 0m0.130s 00:06:33.632 20:57:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.632 20:57:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:33.632 ************************************ 00:06:33.632 END TEST accel_xor 00:06:33.632 ************************************ 00:06:33.632 20:57:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.632 20:57:00 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:33.633 20:57:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:33.633 20:57:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.633 20:57:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.895 ************************************ 00:06:33.895 START TEST accel_dif_verify 00:06:33.895 ************************************ 00:06:33.895 20:57:00 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:33.895 20:57:00 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:33.895 [2024-07-15 20:57:00.942497] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:33.895 [2024-07-15 20:57:00.942543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751211 ] 00:06:33.895 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.895 [2024-07-15 20:57:01.007236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.895 [2024-07-15 20:57:01.072348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.895 20:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:35.307 20:57:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.307 00:06:35.307 real 0m1.275s 00:06:35.307 user 0m1.184s 00:06:35.307 sys 0m0.103s 00:06:35.307 20:57:02 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.307 20:57:02 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:35.307 ************************************ 00:06:35.307 END TEST accel_dif_verify 00:06:35.307 ************************************ 00:06:35.307 20:57:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.307 20:57:02 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:35.307 20:57:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:35.307 20:57:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.307 20:57:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.307 ************************************ 00:06:35.307 START TEST accel_dif_generate 00:06:35.307 ************************************ 00:06:35.307 20:57:02 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.307 
20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:35.307 20:57:02 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:35.307 [2024-07-15 20:57:02.307961] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:35.307 [2024-07-15 20:57:02.308049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751457 ] 00:06:35.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.307 [2024-07-15 20:57:02.377005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.307 [2024-07-15 20:57:02.442884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:35.308 20:57:02 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.308 20:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.313 20:57:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:36.313 20:57:03 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.313 00:06:36.313 real 0m1.295s 00:06:36.313 user 0m1.198s 00:06:36.313 sys 0m0.111s 00:06:36.313 20:57:03 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.313 20:57:03 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:36.313 ************************************ 00:06:36.313 END TEST accel_dif_generate 00:06:36.313 ************************************ 00:06:36.573 20:57:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.573 20:57:03 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:36.573 20:57:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:36.573 20:57:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.574 20:57:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.574 ************************************ 00:06:36.574 START TEST accel_dif_generate_copy 00:06:36.574 ************************************ 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:36.574 [2024-07-15 20:57:03.676626] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
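For orientation, the accel_perf command being traced here can also be run by hand outside the test harness. A minimal sketch, assuming it is acceptable to omit the generated JSON config that build_accel_config normally feeds in over /dev/fd/62; the paths and the -t/-w flags are copied from the trace, which also reports accel_module=software for these runs:

  # Hand-run equivalent of the traced DIF cases.
  # Assumption: no -c config is passed, so the software module (as reported
  # by accel_module=software in the trace) handles the workload.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy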
00:06:36.574 [2024-07-15 20:57:03.676702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751688 ] 00:06:36.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.574 [2024-07-15 20:57:03.746783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.574 [2024-07-15 20:57:03.818683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.574 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.835 20:57:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.779 00:06:37.779 real 0m1.300s 00:06:37.779 user 0m1.202s 00:06:37.779 sys 0m0.110s 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.779 20:57:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:37.779 ************************************ 00:06:37.779 END TEST accel_dif_generate_copy 00:06:37.779 ************************************ 00:06:37.779 20:57:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.779 20:57:04 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:37.779 20:57:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.779 20:57:04 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:37.779 20:57:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.779 20:57:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.779 ************************************ 00:06:37.779 START TEST accel_comp 00:06:37.779 ************************************ 00:06:37.779 20:57:05 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.779 20:57:05 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:37.779 20:57:05 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:37.779 [2024-07-15 20:57:05.053465] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:37.779 [2024-07-15 20:57:05.053563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752038 ] 00:06:38.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.041 [2024-07-15 20:57:05.124003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.041 [2024-07-15 20:57:05.195607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.041 20:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 20:57:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.428 20:57:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:39.428 20:57:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.428 00:06:39.428 real 0m1.305s 00:06:39.428 user 0m1.196s 00:06:39.428 sys 0m0.120s 00:06:39.428 20:57:06 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.428 20:57:06 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:39.428 ************************************ 00:06:39.428 END TEST accel_comp 00:06:39.428 ************************************ 00:06:39.428 20:57:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.428 20:57:06 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.428 20:57:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:39.428 20:57:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.428 20:57:06 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.428 ************************************ 00:06:39.428 START TEST accel_decomp 00:06:39.428 ************************************ 00:06:39.428 20:57:06 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:39.428 [2024-07-15 20:57:06.427351] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
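The compress and decompress cases in this stretch both point accel_perf at the same input file, test/accel/bib. A hand-run sketch of that pair, with the flags copied from the trace and -y assumed to be the result-verification switch:

  # Compress test/accel/bib (as in accel_comp), then decompress it (accel_decomp).
  # Assumption: -y turns on verification of the output buffers.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib"
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y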
00:06:39.428 [2024-07-15 20:57:06.427417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752385 ] 00:06:39.428 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.428 [2024-07-15 20:57:06.506812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.428 [2024-07-15 20:57:06.575656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.428 20:57:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.814 20:57:07 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.814 20:57:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.814 00:06:40.814 real 0m1.309s 00:06:40.814 user 0m1.205s 00:06:40.814 sys 0m0.116s 00:06:40.814 20:57:07 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.814 20:57:07 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:40.814 ************************************ 00:06:40.814 END TEST accel_decomp 00:06:40.814 ************************************ 00:06:40.814 20:57:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.814 20:57:07 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.814 20:57:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:40.814 20:57:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.814 20:57:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.814 ************************************ 00:06:40.814 START TEST accel_decomp_full 00:06:40.814 ************************************ 00:06:40.814 20:57:07 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.814 20:57:07 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:40.814 [2024-07-15 20:57:07.810949] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:40.814 [2024-07-15 20:57:07.811010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752741 ] 00:06:40.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.814 [2024-07-15 20:57:07.879734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.814 [2024-07-15 20:57:07.948200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.814 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.815 20:57:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.198 20:57:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.198 00:06:42.198 real 0m1.311s 00:06:42.198 user 0m1.212s 00:06:42.198 sys 0m0.112s 00:06:42.198 20:57:09 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.198 20:57:09 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:42.198 ************************************ 00:06:42.198 END TEST accel_decomp_full 00:06:42.198 ************************************ 00:06:42.198 20:57:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.198 20:57:09 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.198 20:57:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
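Each case in this log is bracketed by the same run_test pattern: a START banner, the timed command, a real/user/sys summary, then an END banner. A simplified, illustrative stand-in for that wrapper is sketched below; the real helper lives in autotest_common.sh and also handles the xtrace toggling visible in the trace:

  # Illustrative run_test-style wrapper, not SPDK's actual helper.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # usage, mirroring the trace:
  # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -m 0xf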
00:06:42.198 20:57:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.198 20:57:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.198 ************************************ 00:06:42.198 START TEST accel_decomp_mcore 00:06:42.198 ************************************ 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:42.198 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:42.198 [2024-07-15 20:57:09.196081] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
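The mcore variant that has just started differs from plain accel_decomp only in the reactor mask: -m 0xf requests four cores, which matches the four reactors reported a little further on. A hand-run sketch with the flags copied from the trace, under the same assumption as above about omitting the generated config:

  # Multi-core decompress run (accel_decomp_mcore); -m 0xf selects cores 0-3.
  # Assumption: run without the harness-generated -c config.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -m 0xf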
00:06:42.198 [2024-07-15 20:57:09.196145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753167 ] 00:06:42.198 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.199 [2024-07-15 20:57:09.283133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.199 [2024-07-15 20:57:09.361110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.199 [2024-07-15 20:57:09.361264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.199 [2024-07-15 20:57:09.361399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.199 [2024-07-15 20:57:09.361400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:42.199 20:57:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.583 00:06:43.583 real 0m1.333s 00:06:43.583 user 0m4.452s 00:06:43.583 sys 0m0.125s 00:06:43.583 20:57:10 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.583 20:57:10 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:43.583 ************************************ 00:06:43.583 END TEST accel_decomp_mcore 00:06:43.583 ************************************ 00:06:43.583 20:57:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.583 20:57:10 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.583 20:57:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:43.583 20:57:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.583 20:57:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.583 ************************************ 00:06:43.583 START TEST accel_decomp_full_mcore 00:06:43.583 ************************************ 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:43.583 [2024-07-15 20:57:10.603159] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
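For reference, the accel_decomp_full_mcore case launched above reduces to a single accel_perf invocation with the accel JSON config handed in on fd 62 by build_accel_config. A minimal sketch using only the flags visible in the trace; the flag meanings in the comments are inferred from the values the trace records ('1 seconds', '111250 bytes', reactors on cores 0-3), not taken from accel_perf documentation:

# -t 1: run for one second; -w decompress: software decompress workload; -l: compressed input file
# -y: verify the output; -o 0: full-size operations (the trace records '111250 bytes'); -m 0xf: cores 0-3
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 \
  -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf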
00:06:43.583 [2024-07-15 20:57:10.603256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753585 ] 00:06:43.583 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.583 [2024-07-15 20:57:10.673462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.583 [2024-07-15 20:57:10.743843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.583 [2024-07-15 20:57:10.743959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.583 [2024-07-15 20:57:10.744120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.583 [2024-07-15 20:57:10.744122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.583 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.584 20:57:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.969 00:06:44.969 real 0m1.321s 00:06:44.969 user 0m4.483s 00:06:44.969 sys 0m0.122s 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.969 20:57:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:44.969 ************************************ 00:06:44.969 END TEST accel_decomp_full_mcore 00:06:44.969 ************************************ 00:06:44.969 20:57:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.969 20:57:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.969 20:57:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:44.969 20:57:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.969 20:57:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.969 ************************************ 00:06:44.969 START TEST accel_decomp_mthread 00:06:44.969 ************************************ 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:44.969 20:57:11 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:44.969 [2024-07-15 20:57:11.998567] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
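The accel_decomp_mthread case that starts here drives the same binary, but on a single core with two worker threads instead of a wider core mask; -T 2 matches the val=2 recorded below, and the operation size falls back to the default '4096 bytes' seen in the trace. A minimal sketch under the same assumptions as above:

# single-core run, two threads, default 4096-byte operations
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 \
  -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2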
00:06:44.969 [2024-07-15 20:57:11.998669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753934 ] 00:06:44.969 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.969 [2024-07-15 20:57:12.068829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.969 [2024-07-15 20:57:12.138112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.969 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.970 20:57:12 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.970 20:57:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 20:57:13 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.357 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.358 00:06:46.358 real 0m1.308s 00:06:46.358 user 0m1.209s 00:06:46.358 sys 0m0.111s 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.358 20:57:13 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:46.358 ************************************ 00:06:46.358 END TEST accel_decomp_mthread 00:06:46.358 ************************************ 00:06:46.358 20:57:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.358 20:57:13 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.358 20:57:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:46.358 20:57:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.358 20:57:13 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.358 ************************************ 00:06:46.358 START TEST accel_decomp_full_mthread 00:06:46.358 ************************************ 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:46.358 [2024-07-15 20:57:13.381433] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
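Each of these accel cases is launched through the run_test helper, which is what produces the START TEST / END TEST banners and the real/user/sys timings scattered through this log. A rough, hypothetical sketch of the pattern as it can be inferred from the banners and timings alone, not the actual common/autotest_common.sh implementation:

# hypothetical shape of the wrapper, for orientation only
run_test() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"                       # runs the accel_test command passed in
  echo "************ END TEST $name ************"
}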
00:06:46.358 [2024-07-15 20:57:13.381498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754282 ] 00:06:46.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.358 [2024-07-15 20:57:13.451051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.358 [2024-07-15 20:57:13.521172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 20:57:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 20:57:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.744 00:06:47.744 real 0m1.332s 00:06:47.744 user 0m1.236s 00:06:47.744 sys 0m0.108s 00:06:47.744 20:57:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.745 20:57:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:47.745 ************************************ 00:06:47.745 END 
TEST accel_decomp_full_mthread 00:06:47.745 ************************************ 00:06:47.745 20:57:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.745 20:57:14 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:47.745 20:57:14 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:47.745 20:57:14 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:47.745 20:57:14 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:47.745 20:57:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.745 20:57:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.745 20:57:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.745 20:57:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.745 20:57:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.745 20:57:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.745 20:57:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.745 20:57:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:47.745 20:57:14 accel -- accel/accel.sh@41 -- # jq -r . 00:06:47.745 ************************************ 00:06:47.745 START TEST accel_dif_functional_tests 00:06:47.745 ************************************ 00:06:47.745 20:57:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:47.745 [2024-07-15 20:57:14.818976] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:47.745 [2024-07-15 20:57:14.819026] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754632 ] 00:06:47.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.745 [2024-07-15 20:57:14.888981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.745 [2024-07-15 20:57:14.963015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.745 [2024-07-15 20:57:14.963136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.745 [2024-07-15 20:57:14.963139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.745 00:06:47.745 00:06:47.745 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.745 http://cunit.sourceforge.net/ 00:06:47.745 00:06:47.745 00:06:47.745 Suite: accel_dif 00:06:47.745 Test: verify: DIF generated, GUARD check ...passed 00:06:47.745 Test: verify: DIF generated, APPTAG check ...passed 00:06:47.745 Test: verify: DIF generated, REFTAG check ...passed 00:06:47.745 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:57:15.018953] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:47.745 passed 00:06:47.745 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:57:15.018995] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:47.745 passed 00:06:47.745 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:57:15.019017] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:47.745 passed 00:06:47.745 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:47.745 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
20:57:15.019066] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:47.745 passed 00:06:47.745 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:47.745 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:47.745 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:47.745 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:57:15.019179] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:47.745 passed 00:06:47.745 Test: verify copy: DIF generated, GUARD check ...passed 00:06:47.745 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:47.745 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:47.745 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:57:15.019306] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:47.745 passed 00:06:47.745 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:57:15.019328] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:47.745 passed 00:06:47.745 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:57:15.019350] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:47.745 passed 00:06:47.745 Test: generate copy: DIF generated, GUARD check ...passed 00:06:47.745 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:47.745 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:47.745 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:47.745 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:47.745 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:47.745 Test: generate copy: iovecs-len validate ...[2024-07-15 20:57:15.019537] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:47.745 passed 00:06:47.745 Test: generate copy: buffer alignment validate ...passed 00:06:47.745 00:06:47.745 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.745 suites 1 1 n/a 0 0 00:06:47.745 tests 26 26 26 0 0 00:06:47.745 asserts 115 115 115 0 n/a 00:06:47.745 00:06:47.745 Elapsed time = 0.002 seconds 00:06:48.006 00:06:48.006 real 0m0.374s 00:06:48.006 user 0m0.510s 00:06:48.006 sys 0m0.129s 00:06:48.006 20:57:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.006 20:57:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 ************************************ 00:06:48.006 END TEST accel_dif_functional_tests 00:06:48.006 ************************************ 00:06:48.006 20:57:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.006 00:06:48.006 real 0m30.323s 00:06:48.006 user 0m33.747s 00:06:48.006 sys 0m4.301s 00:06:48.006 20:57:15 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.006 20:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 ************************************ 00:06:48.006 END TEST accel 00:06:48.006 ************************************ 00:06:48.006 20:57:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.006 20:57:15 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:48.006 20:57:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.006 20:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.006 20:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 ************************************ 00:06:48.006 START TEST accel_rpc 00:06:48.006 ************************************ 00:06:48.006 20:57:15 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:48.267 * Looking for test storage... 00:06:48.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:48.267 20:57:15 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.267 20:57:15 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1754703 00:06:48.267 20:57:15 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1754703 00:06:48.267 20:57:15 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:48.267 20:57:15 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1754703 ']' 00:06:48.267 20:57:15 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.267 20:57:15 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.267 20:57:15 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.267 20:57:15 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.267 20:57:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.267 [2024-07-15 20:57:15.409869] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
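The accel_rpc suite that begins here is exercised entirely over JSON-RPC against a target started with --wait-for-rpc. A condensed sketch of the flow using only RPC names that appear in the trace; rpc_cmd in the trace is assumed to resolve to scripts/rpc.py against the default socket:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc &
# pin the copy opcode to the software module before subsystem init completes
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
# the test then confirms the assignment stuck (it greps the output for "software")
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy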
00:06:48.267 [2024-07-15 20:57:15.409943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754703 ] 00:06:48.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.267 [2024-07-15 20:57:15.484201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.528 [2024-07-15 20:57:15.557836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.101 20:57:16 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.101 20:57:16 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.101 20:57:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:49.101 20:57:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:49.101 20:57:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:49.101 20:57:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:49.101 20:57:16 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:49.101 20:57:16 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.101 20:57:16 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.101 20:57:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.101 ************************************ 00:06:49.101 START TEST accel_assign_opcode 00:06:49.101 ************************************ 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.101 [2024-07-15 20:57:16.211837] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.101 [2024-07-15 20:57:16.223855] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.101 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.362 software 00:06:49.362 00:06:49.362 real 0m0.208s 00:06:49.362 user 0m0.051s 00:06:49.362 sys 0m0.010s 00:06:49.362 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.362 20:57:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.362 ************************************ 00:06:49.362 END TEST accel_assign_opcode 00:06:49.362 ************************************ 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:49.362 20:57:16 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1754703 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1754703 ']' 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1754703 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1754703 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1754703' 00:06:49.362 killing process with pid 1754703 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@967 -- # kill 1754703 00:06:49.362 20:57:16 accel_rpc -- common/autotest_common.sh@972 -- # wait 1754703 00:06:49.623 00:06:49.623 real 0m1.462s 00:06:49.623 user 0m1.525s 00:06:49.623 sys 0m0.428s 00:06:49.623 20:57:16 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.623 20:57:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.623 ************************************ 00:06:49.623 END TEST accel_rpc 00:06:49.623 ************************************ 00:06:49.623 20:57:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.623 20:57:16 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:49.623 20:57:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.623 20:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.623 20:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:49.623 ************************************ 00:06:49.623 START TEST app_cmdline 00:06:49.623 ************************************ 00:06:49.623 20:57:16 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:49.623 * Looking for test storage... 
00:06:49.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:49.623 20:57:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:49.623 20:57:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1755106 00:06:49.623 20:57:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1755106 00:06:49.623 20:57:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:49.623 20:57:16 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1755106 ']' 00:06:49.623 20:57:16 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.623 20:57:16 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.623 20:57:16 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.623 20:57:16 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.623 20:57:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.884 [2024-07-15 20:57:16.946947] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:49.884 [2024-07-15 20:57:16.947049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755106 ] 00:06:49.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.884 [2024-07-15 20:57:17.017493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.884 [2024-07-15 20:57:17.083532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.454 20:57:17 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.454 20:57:17 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:50.454 20:57:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:50.715 { 00:06:50.715 "version": "SPDK v24.09-pre git sha1 cdc37ee83", 00:06:50.715 "fields": { 00:06:50.715 "major": 24, 00:06:50.715 "minor": 9, 00:06:50.715 "patch": 0, 00:06:50.715 "suffix": "-pre", 00:06:50.715 "commit": "cdc37ee83" 00:06:50.715 } 00:06:50.715 } 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:50.715 20:57:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:50.715 20:57:17 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.975 request: 00:06:50.975 { 00:06:50.975 "method": "env_dpdk_get_mem_stats", 00:06:50.975 "req_id": 1 00:06:50.975 } 00:06:50.975 Got JSON-RPC error response 00:06:50.975 response: 00:06:50.975 { 00:06:50.975 "code": -32601, 00:06:50.975 "message": "Method not found" 00:06:50.975 } 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.975 20:57:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1755106 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1755106 ']' 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1755106 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1755106 00:06:50.975 20:57:18 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.976 20:57:18 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.976 20:57:18 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1755106' 00:06:50.976 killing process with pid 1755106 00:06:50.976 20:57:18 app_cmdline -- common/autotest_common.sh@967 -- # kill 1755106 00:06:50.976 20:57:18 app_cmdline -- common/autotest_common.sh@972 -- # wait 1755106 00:06:51.236 00:06:51.236 real 0m1.575s 00:06:51.236 user 0m1.901s 00:06:51.236 sys 0m0.407s 00:06:51.236 20:57:18 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
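The app_cmdline block above exercises the --rpcs-allowed filter: the target is started with only spdk_get_version and rpc_get_methods whitelisted, those two answer normally, and any other method (env_dpdk_get_mem_stats here) is rejected with JSON-RPC error -32601 "Method not found". A hand-run equivalent, sketched with the same binaries used by this workspace and scripts/rpc.py in place of rpc_cmd:

    # start a target that only exposes two RPC methods
    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # once the socket is up, the whitelisted calls succeed...
    scripts/rpc.py spdk_get_version
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # exactly rpc_get_methods, spdk_get_version
    # ...and everything else is refused
    scripts/rpc.py env_dpdk_get_mem_stats                  # expect code -32601, "Method not found"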
00:06:51.236 20:57:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 ************************************ 00:06:51.236 END TEST app_cmdline 00:06:51.236 ************************************ 00:06:51.236 20:57:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.236 20:57:18 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:51.236 20:57:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.236 20:57:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.236 20:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 ************************************ 00:06:51.236 START TEST version 00:06:51.236 ************************************ 00:06:51.236 20:57:18 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:51.497 * Looking for test storage... 00:06:51.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:51.497 20:57:18 version -- app/version.sh@17 -- # get_header_version major 00:06:51.497 20:57:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:51.497 20:57:18 version -- app/version.sh@14 -- # cut -f2 00:06:51.497 20:57:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.497 20:57:18 version -- app/version.sh@17 -- # major=24 00:06:51.497 20:57:18 version -- app/version.sh@18 -- # get_header_version minor 00:06:51.497 20:57:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.497 20:57:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:51.497 20:57:18 version -- app/version.sh@14 -- # cut -f2 00:06:51.497 20:57:18 version -- app/version.sh@18 -- # minor=9 00:06:51.497 20:57:18 version -- app/version.sh@19 -- # get_header_version patch 00:06:51.497 20:57:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.497 20:57:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:51.497 20:57:18 version -- app/version.sh@14 -- # cut -f2 00:06:51.497 20:57:18 version -- app/version.sh@19 -- # patch=0 00:06:51.497 20:57:18 version -- app/version.sh@20 -- # get_header_version suffix 00:06:51.497 20:57:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.497 20:57:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:51.497 20:57:18 version -- app/version.sh@14 -- # cut -f2 00:06:51.497 20:57:18 version -- app/version.sh@20 -- # suffix=-pre 00:06:51.497 20:57:18 version -- app/version.sh@22 -- # version=24.9 00:06:51.497 20:57:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:51.497 20:57:18 version -- app/version.sh@28 -- # version=24.9rc0 00:06:51.497 20:57:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:51.497 20:57:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:51.497 20:57:18 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:51.497 20:57:18 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:51.497 00:06:51.497 real 0m0.170s 00:06:51.497 user 0m0.081s 00:06:51.497 sys 0m0.120s 00:06:51.497 20:57:18 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.497 20:57:18 version -- common/autotest_common.sh@10 -- # set +x 00:06:51.497 ************************************ 00:06:51.497 END TEST version 00:06:51.497 ************************************ 00:06:51.497 20:57:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.497 20:57:18 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:51.497 20:57:18 -- spdk/autotest.sh@198 -- # uname -s 00:06:51.497 20:57:18 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:51.497 20:57:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:51.497 20:57:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:51.497 20:57:18 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:51.497 20:57:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:51.497 20:57:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:51.497 20:57:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.497 20:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:51.497 20:57:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:51.497 20:57:18 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:51.497 20:57:18 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:51.497 20:57:18 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:51.497 20:57:18 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:51.497 20:57:18 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:51.497 20:57:18 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:51.497 20:57:18 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.497 20:57:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.497 20:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:51.497 ************************************ 00:06:51.497 START TEST nvmf_tcp 00:06:51.497 ************************************ 00:06:51.497 20:57:18 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:51.758 * Looking for test storage... 00:06:51.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.758 20:57:18 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.758 20:57:18 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.758 20:57:18 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.758 20:57:18 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.758 20:57:18 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.758 20:57:18 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.758 20:57:18 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:51.758 20:57:18 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:51.758 20:57:18 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.758 20:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:51.758 20:57:18 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:51.758 20:57:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.758 20:57:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.758 20:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.758 ************************************ 00:06:51.758 START TEST nvmf_example 00:06:51.758 ************************************ 00:06:51.758 20:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:51.758 * Looking for test storage... 
00:06:51.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.758 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:52.019 20:57:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.157 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:00.158 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:00.158 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:00.158 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:00.158 Found net devices under 0000:31:00.1: cvl_0_1 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.158 20:57:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.866 ms 00:07:00.158 00:07:00.158 --- 10.0.0.2 ping statistics --- 00:07:00.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.158 rtt min/avg/max/mdev = 0.866/0.866/0.866/0.000 ms 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:07:00.158 00:07:00.158 --- 10.0.0.1 ping statistics --- 00:07:00.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.158 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:00.158 20:57:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1759891 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1759891 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1759891 ']' 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
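The setup traced above gives the "phy" TCP topology its shape: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, with TCP port 4420 opened and both directions ping-checked. Condensed into plain commands, a sketch of what nvmf_tcp_init just did, using the same interface names as logged:

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                            # default ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> default ns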
00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.159 20:57:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.159 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:01.099 20:57:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:01.099 EAL: No free 2048 kB hugepages reported on node 1 
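With the example nvmf app running inside the namespace, the rpc_cmd calls above provision it and spdk_nvme_perf then drives it from the default namespace. Roughly, as plain commands (a sketch only; scripts/rpc.py stands in for the rpc_cmd wrapper, arguments as logged):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as logged
    scripts/rpc.py bdev_malloc_create 64 512                    # 64 MB malloc bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4 KiB mixed random I/O at queue depth 64 against the new listener
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'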
00:07:11.095 Initializing NVMe Controllers 00:07:11.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:11.095 Initialization complete. Launching workers. 00:07:11.095 ======================================================== 00:07:11.095 Latency(us) 00:07:11.095 Device Information : IOPS MiB/s Average min max 00:07:11.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19128.51 74.72 3345.38 592.28 16316.66 00:07:11.095 ======================================================== 00:07:11.095 Total : 19128.51 74.72 3345.38 592.28 16316.66 00:07:11.095 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.095 rmmod nvme_tcp 00:07:11.095 rmmod nvme_fabrics 00:07:11.095 rmmod nvme_keyring 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1759891 ']' 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1759891 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1759891 ']' 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1759891 00:07:11.095 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1759891 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1759891' 00:07:11.355 killing process with pid 1759891 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1759891 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1759891 00:07:11.355 nvmf threads initialize successfully 00:07:11.355 bdev subsystem init successfully 00:07:11.355 created a nvmf target service 00:07:11.355 create targets's poll groups done 00:07:11.355 all subsystems of target started 00:07:11.355 nvmf target is running 00:07:11.355 all subsystems of target stopped 00:07:11.355 destroy targets's poll groups done 00:07:11.355 destroyed the nvmf target service 00:07:11.355 bdev subsystem finish successfully 00:07:11.355 nvmf threads destroy successfully 00:07:11.355 20:57:38 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.355 20:57:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.953 20:57:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:13.953 20:57:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:13.953 20:57:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.953 20:57:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.953 00:07:13.953 real 0m21.765s 00:07:13.953 user 0m46.037s 00:07:13.953 sys 0m7.291s 00:07:13.953 20:57:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.953 20:57:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.953 ************************************ 00:07:13.953 END TEST nvmf_example 00:07:13.953 ************************************ 00:07:13.953 20:57:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:13.953 20:57:40 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:13.953 20:57:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.953 20:57:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.953 20:57:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.953 ************************************ 00:07:13.953 START TEST nvmf_filesystem 00:07:13.953 ************************************ 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:13.953 * Looking for test storage... 
00:07:13.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:13.953 20:57:40 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:13.953 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:13.954 #define SPDK_CONFIG_H 00:07:13.954 #define SPDK_CONFIG_APPS 1 00:07:13.954 #define SPDK_CONFIG_ARCH native 00:07:13.954 #undef SPDK_CONFIG_ASAN 00:07:13.954 #undef SPDK_CONFIG_AVAHI 00:07:13.954 #undef SPDK_CONFIG_CET 00:07:13.954 #define SPDK_CONFIG_COVERAGE 1 00:07:13.954 #define SPDK_CONFIG_CROSS_PREFIX 00:07:13.954 #undef SPDK_CONFIG_CRYPTO 00:07:13.954 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:13.954 #undef SPDK_CONFIG_CUSTOMOCF 00:07:13.954 #undef SPDK_CONFIG_DAOS 00:07:13.954 #define SPDK_CONFIG_DAOS_DIR 00:07:13.954 #define SPDK_CONFIG_DEBUG 1 00:07:13.954 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:13.954 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:13.954 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:13.954 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:13.954 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:13.954 #undef SPDK_CONFIG_DPDK_UADK 00:07:13.954 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:13.954 #define SPDK_CONFIG_EXAMPLES 1 00:07:13.954 #undef SPDK_CONFIG_FC 00:07:13.954 #define SPDK_CONFIG_FC_PATH 00:07:13.954 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:13.954 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:13.954 #undef SPDK_CONFIG_FUSE 00:07:13.954 #undef SPDK_CONFIG_FUZZER 00:07:13.954 #define SPDK_CONFIG_FUZZER_LIB 00:07:13.954 #undef SPDK_CONFIG_GOLANG 00:07:13.954 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:13.954 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:13.954 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:13.954 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:13.954 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:13.954 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:13.954 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:13.954 #define SPDK_CONFIG_IDXD 1 00:07:13.954 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:13.954 #undef SPDK_CONFIG_IPSEC_MB 00:07:13.954 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:13.954 #define SPDK_CONFIG_ISAL 1 00:07:13.954 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:13.954 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:13.954 #define SPDK_CONFIG_LIBDIR 00:07:13.954 #undef SPDK_CONFIG_LTO 00:07:13.954 #define SPDK_CONFIG_MAX_LCORES 128 00:07:13.954 #define SPDK_CONFIG_NVME_CUSE 1 00:07:13.954 #undef SPDK_CONFIG_OCF 00:07:13.954 #define SPDK_CONFIG_OCF_PATH 00:07:13.954 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:13.954 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:13.954 #define SPDK_CONFIG_PGO_DIR 00:07:13.954 #undef SPDK_CONFIG_PGO_USE 00:07:13.954 #define SPDK_CONFIG_PREFIX /usr/local 00:07:13.954 #undef SPDK_CONFIG_RAID5F 00:07:13.954 #undef SPDK_CONFIG_RBD 00:07:13.954 #define SPDK_CONFIG_RDMA 1 00:07:13.954 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:13.954 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:13.954 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:13.954 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:13.954 #define SPDK_CONFIG_SHARED 1 00:07:13.954 #undef SPDK_CONFIG_SMA 00:07:13.954 #define SPDK_CONFIG_TESTS 1 00:07:13.954 #undef SPDK_CONFIG_TSAN 00:07:13.954 #define SPDK_CONFIG_UBLK 1 00:07:13.954 #define SPDK_CONFIG_UBSAN 1 00:07:13.954 #undef SPDK_CONFIG_UNIT_TESTS 00:07:13.954 #undef SPDK_CONFIG_URING 00:07:13.954 #define SPDK_CONFIG_URING_PATH 00:07:13.954 #undef SPDK_CONFIG_URING_ZNS 00:07:13.954 #undef SPDK_CONFIG_USDT 00:07:13.954 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:13.954 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:13.954 #define SPDK_CONFIG_VFIO_USER 1 00:07:13.954 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:13.954 #define SPDK_CONFIG_VHOST 1 00:07:13.954 #define SPDK_CONFIG_VIRTIO 1 00:07:13.954 #undef SPDK_CONFIG_VTUNE 00:07:13.954 #define SPDK_CONFIG_VTUNE_DIR 00:07:13.954 #define SPDK_CONFIG_WERROR 1 00:07:13.954 #define SPDK_CONFIG_WPDK_DIR 00:07:13.954 #undef SPDK_CONFIG_XNVME 00:07:13.954 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:13.954 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:13.955 20:57:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:13.955 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
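The trace above shows autotest_common.sh preparing the sanitizer environment before the nvmf_filesystem test is launched. A minimal sketch of that sequence in plain shell, reconstructed from the xtrace output (the suppression-file path, the libfuse3 leak entry, the ASAN/UBSAN option strings and the RPC socket path are taken from the log; presenting it as a standalone script is an assumption, not the verbatim autotest_common.sh source):

    #!/usr/bin/env bash
    # Recreate the LeakSanitizer suppression file used by the harness
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"   # ignore known libfuse3 leak reports

    # Export the sanitizer runtime options seen in the trace
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"

    # Default JSON-RPC socket address used by SPDK applications
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock

With these exported, any UBSAN-instrumented SPDK binary started later in the run (for example the nvmf_tgt launched below) aborts with exit code 134 on a sanitizer error while suppressing the known libfuse3 leak report.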
00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1762691 ]] 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1762691 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.zBIkp3 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zBIkp3/tests/target /tmp/spdk.zBIkp3 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122796167168 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6574813184 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864253440 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9945088 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:13.956 20:57:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683663360 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1826816 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:13.956 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:13.957 * Looking for test storage... 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:13.957 20:57:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122796167168 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8789405696 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:13.957 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:13.958 20:57:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:13.958 20:57:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.098 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:22.099 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:22.099 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:22.099 Found net devices under 0000:31:00.0: cvl_0_0 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:22.099 Found net devices under 0000:31:00.1: cvl_0_1 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.099 20:57:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:07:22.099 00:07:22.099 --- 10.0.0.2 ping statistics --- 00:07:22.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.099 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:22.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:07:22.099 00:07:22.099 --- 10.0.0.1 ping statistics --- 00:07:22.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.099 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.099 ************************************ 00:07:22.099 START TEST nvmf_filesystem_no_in_capsule 00:07:22.099 ************************************ 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1766991 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1766991 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1766991 ']' 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.099 20:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.099 [2024-07-15 20:57:49.302019] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:22.099 [2024-07-15 20:57:49.302066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.099 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.099 [2024-07-15 20:57:49.376137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.360 [2024-07-15 20:57:49.444041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.360 [2024-07-15 20:57:49.444080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.360 [2024-07-15 20:57:49.444088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.360 [2024-07-15 20:57:49.444095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.360 [2024-07-15 20:57:49.444100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.360 [2024-07-15 20:57:49.444267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.360 [2024-07-15 20:57:49.444475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.360 [2024-07-15 20:57:49.444302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.360 [2024-07-15 20:57:49.444475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.932 [2024-07-15 20:57:50.121922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
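That "TCP Transport Init" notice closes the setup phase of this run: nvmf/common.sh has detected the two E810 ports (cvl_0_0 / cvl_0_1), moved one of them into a private network namespace, addressed both ends, opened TCP port 4420 and verified reachability with the two pings, and autotest has then started nvmf_tgt inside the namespace and created the TCP transport over its RPC socket. Condensed into plain commands, as a sketch only (interface names, paths and arguments are taken from this trace; rpc_cmd is the suite's wrapper around scripts/rpc.py and the /var/tmp/spdk.sock socket mentioned above):

  ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                    # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace
  modprobe nvme-tcp                                     # kernel NVMe/TCP initiator for the host side
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # full workspace path in the trace
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0  # in-capsule data disabled for this first half
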
00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.932 Malloc1 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.932 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.193 [2024-07-15 20:57:50.251301] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:23.193 { 00:07:23.193 "name": "Malloc1", 00:07:23.193 "aliases": [ 00:07:23.193 "dea467b9-e78b-4ad7-9de5-78b198df0287" 00:07:23.193 ], 00:07:23.193 "product_name": "Malloc disk", 00:07:23.193 "block_size": 512, 00:07:23.193 "num_blocks": 1048576, 00:07:23.193 "uuid": "dea467b9-e78b-4ad7-9de5-78b198df0287", 00:07:23.193 "assigned_rate_limits": { 00:07:23.193 "rw_ios_per_sec": 0, 00:07:23.193 "rw_mbytes_per_sec": 0, 00:07:23.193 "r_mbytes_per_sec": 0, 00:07:23.193 "w_mbytes_per_sec": 0 00:07:23.193 }, 00:07:23.193 "claimed": true, 00:07:23.193 "claim_type": "exclusive_write", 00:07:23.193 "zoned": false, 00:07:23.193 "supported_io_types": { 00:07:23.193 "read": true, 00:07:23.193 "write": true, 00:07:23.193 "unmap": true, 00:07:23.193 "flush": true, 00:07:23.193 "reset": true, 00:07:23.193 "nvme_admin": false, 00:07:23.193 "nvme_io": false, 00:07:23.193 "nvme_io_md": false, 00:07:23.193 "write_zeroes": true, 00:07:23.193 "zcopy": true, 00:07:23.193 "get_zone_info": false, 00:07:23.193 "zone_management": false, 00:07:23.193 "zone_append": false, 00:07:23.193 "compare": false, 00:07:23.193 "compare_and_write": false, 00:07:23.193 "abort": true, 00:07:23.193 "seek_hole": false, 00:07:23.193 "seek_data": false, 00:07:23.193 "copy": true, 00:07:23.193 "nvme_iov_md": false 00:07:23.193 }, 00:07:23.193 "memory_domains": [ 00:07:23.193 { 00:07:23.193 "dma_device_id": "system", 00:07:23.193 "dma_device_type": 1 00:07:23.193 }, 00:07:23.193 { 00:07:23.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.193 "dma_device_type": 2 00:07:23.193 } 00:07:23.193 ], 00:07:23.193 "driver_specific": {} 00:07:23.193 } 00:07:23.193 ]' 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.193 20:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.616 20:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.616 20:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:24.616 20:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:07:24.616 20:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:24.616 20:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:26.605 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:26.865 20:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:26.865 20:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:27.805 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:27.805 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:27.805 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:27.805 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.805 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.066 
************************************ 00:07:28.066 START TEST filesystem_ext4 00:07:28.066 ************************************ 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.066 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.066 Discarding device blocks: 0/522240 done 00:07:28.066 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.066 Filesystem UUID: ffe1a348-7c6f-4c9f-baf8-24219b258a74 00:07:28.066 Superblock backups stored on blocks: 00:07:28.066 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.066 00:07:28.066 Allocating group tables: 0/64 done 00:07:28.066 Writing inode tables: 0/64 done 00:07:28.066 Creating journal (8192 blocks): done 00:07:28.066 Writing superblocks and filesystem accounting information: 0/64 done 00:07:28.066 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:28.066 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.639 20:57:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1766991 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.639 00:07:28.639 real 0m0.724s 00:07:28.639 user 0m0.025s 00:07:28.639 sys 0m0.048s 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:28.639 ************************************ 00:07:28.639 END TEST filesystem_ext4 00:07:28.639 ************************************ 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.639 ************************************ 00:07:28.639 START TEST filesystem_btrfs 00:07:28.639 ************************************ 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:28.639 20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:28.639 
20:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:29.211 btrfs-progs v6.6.2 00:07:29.211 See https://btrfs.readthedocs.io for more information. 00:07:29.211 00:07:29.211 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:29.211 NOTE: several default settings have changed in version 5.15, please make sure 00:07:29.211 this does not affect your deployments: 00:07:29.211 - DUP for metadata (-m dup) 00:07:29.211 - enabled no-holes (-O no-holes) 00:07:29.211 - enabled free-space-tree (-R free-space-tree) 00:07:29.211 00:07:29.211 Label: (null) 00:07:29.211 UUID: a681b5fa-2519-408b-8398-0d3edf6b7add 00:07:29.211 Node size: 16384 00:07:29.211 Sector size: 4096 00:07:29.211 Filesystem size: 510.00MiB 00:07:29.211 Block group profiles: 00:07:29.211 Data: single 8.00MiB 00:07:29.211 Metadata: DUP 32.00MiB 00:07:29.211 System: DUP 8.00MiB 00:07:29.211 SSD detected: yes 00:07:29.211 Zoned device: no 00:07:29.211 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:29.211 Runtime features: free-space-tree 00:07:29.211 Checksum: crc32c 00:07:29.211 Number of devices: 1 00:07:29.211 Devices: 00:07:29.211 ID SIZE PATH 00:07:29.211 1 510.00MiB /dev/nvme0n1p1 00:07:29.211 00:07:29.211 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:29.211 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1766991 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.473 00:07:29.473 real 0m0.674s 00:07:29.473 user 0m0.028s 00:07:29.473 sys 0m0.061s 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
00:07:29.473 ************************************ 00:07:29.473 END TEST filesystem_btrfs 00:07:29.473 ************************************ 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.473 ************************************ 00:07:29.473 START TEST filesystem_xfs 00:07:29.473 ************************************ 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:29.473 20:57:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:29.473 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:29.473 = sectsz=512 attr=2, projid32bit=1 00:07:29.473 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:29.473 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:29.473 data = bsize=4096 blocks=130560, imaxpct=25 00:07:29.473 = sunit=0 swidth=0 blks 00:07:29.473 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:29.473 log =internal log bsize=4096 blocks=16384, version=2 00:07:29.473 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:29.473 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.859 Discarding blocks...Done. 
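With "Discarding blocks...Done." that is the third mkfs of the same exported namespace in this half of the run: ext4, btrfs and now xfs. Each filesystem_* subtest follows the same pattern traced above; a minimal paraphrase, using the device path, mountpoint and pid seen in this trace ($fstype and $nvmfpid stand for the ext4/btrfs/xfs values and the 1766991 pid shown above; the real make_filesystem helper also carries the i/force bookkeeping visible in the trace):

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition spanning the namespace
  partprobe
  force=-f; [ "$fstype" = ext4 ] && force=-F     # only mkfs.ext4 takes -F, btrfs/xfs use -f
  "mkfs.$fstype" $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device               # prove the filesystem is usable over NVMe/TCP
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                             # target process must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1p1        # partition still visible after unmount
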
00:07:30.859 20:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:30.859 20:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.773 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.773 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:32.773 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.773 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:32.773 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1766991 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.774 00:07:32.774 real 0m2.930s 00:07:32.774 user 0m0.022s 00:07:32.774 sys 0m0.057s 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:32.774 ************************************ 00:07:32.774 END TEST filesystem_xfs 00:07:32.774 ************************************ 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.774 20:57:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1766991 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1766991 ']' 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1766991 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1766991 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1766991' 00:07:32.774 killing process with pid 1766991 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1766991 00:07:32.774 20:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1766991 00:07:33.034 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:33.034 00:07:33.034 real 0m10.947s 00:07:33.034 user 0m43.052s 00:07:33.034 sys 0m1.049s 00:07:33.034 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.034 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.034 ************************************ 00:07:33.034 END TEST nvmf_filesystem_no_in_capsule 00:07:33.034 ************************************ 00:07:33.034 20:58:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:33.034 20:58:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:33.034 20:58:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:33.034 20:58:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.034 20:58:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.034 ************************************ 00:07:33.035 START TEST nvmf_filesystem_in_capsule 00:07:33.035 ************************************ 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1769272 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1769272 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1769272 ']' 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.035 20:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.296 [2024-07-15 20:58:00.326725] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:33.296 [2024-07-15 20:58:00.326762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.296 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.296 [2024-07-15 20:58:00.388661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.296 [2024-07-15 20:58:00.453765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.296 [2024-07-15 20:58:00.453801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:33.296 [2024-07-15 20:58:00.453810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.296 [2024-07-15 20:58:00.453816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.296 [2024-07-15 20:58:00.453822] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.296 [2024-07-15 20:58:00.453984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.296 [2024-07-15 20:58:00.454098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.296 [2024-07-15 20:58:00.454265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.296 [2024-07-15 20:58:00.454280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.865 [2024-07-15 20:58:01.147908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.865 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.125 Malloc1 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.125 20:58:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.125 [2024-07-15 20:58:01.275216] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:34.125 { 00:07:34.125 "name": "Malloc1", 00:07:34.125 "aliases": [ 00:07:34.125 "578db949-0a00-4b6e-a222-87d8c7d3a6e3" 00:07:34.125 ], 00:07:34.125 "product_name": "Malloc disk", 00:07:34.125 "block_size": 512, 00:07:34.125 "num_blocks": 1048576, 00:07:34.125 "uuid": "578db949-0a00-4b6e-a222-87d8c7d3a6e3", 00:07:34.125 "assigned_rate_limits": { 00:07:34.125 "rw_ios_per_sec": 0, 00:07:34.125 "rw_mbytes_per_sec": 0, 00:07:34.125 "r_mbytes_per_sec": 0, 00:07:34.125 "w_mbytes_per_sec": 0 00:07:34.125 }, 00:07:34.125 "claimed": true, 00:07:34.125 "claim_type": "exclusive_write", 00:07:34.125 "zoned": false, 00:07:34.125 "supported_io_types": { 00:07:34.125 "read": true, 00:07:34.125 "write": true, 00:07:34.125 "unmap": true, 00:07:34.125 "flush": true, 00:07:34.125 "reset": true, 00:07:34.125 "nvme_admin": false, 00:07:34.125 "nvme_io": false, 00:07:34.125 "nvme_io_md": false, 00:07:34.125 "write_zeroes": true, 00:07:34.125 "zcopy": true, 00:07:34.125 "get_zone_info": false, 00:07:34.125 "zone_management": false, 00:07:34.125 
"zone_append": false, 00:07:34.125 "compare": false, 00:07:34.125 "compare_and_write": false, 00:07:34.125 "abort": true, 00:07:34.125 "seek_hole": false, 00:07:34.125 "seek_data": false, 00:07:34.125 "copy": true, 00:07:34.125 "nvme_iov_md": false 00:07:34.125 }, 00:07:34.125 "memory_domains": [ 00:07:34.125 { 00:07:34.125 "dma_device_id": "system", 00:07:34.125 "dma_device_type": 1 00:07:34.125 }, 00:07:34.125 { 00:07:34.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.125 "dma_device_type": 2 00:07:34.125 } 00:07:34.125 ], 00:07:34.125 "driver_specific": {} 00:07:34.125 } 00:07:34.125 ]' 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:34.125 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:34.126 20:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:36.037 20:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:36.037 20:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:36.037 20:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:36.037 20:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:36.037 20:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:37.945 20:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:38.206 20:58:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:38.467 20:58:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.424 ************************************ 00:07:39.424 START TEST filesystem_in_capsule_ext4 00:07:39.424 ************************************ 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:39.424 20:58:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:39.424 20:58:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:39.424 mke2fs 1.46.5 (30-Dec-2021) 00:07:39.683 Discarding device blocks: 0/522240 done 00:07:39.683 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:39.683 Filesystem UUID: a457fd1f-6a22-47af-b2a3-362ff1e738ec 00:07:39.683 Superblock backups stored on blocks: 00:07:39.683 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:39.683 00:07:39.683 Allocating group tables: 0/64 done 00:07:39.683 Writing inode tables: 0/64 done 00:07:39.943 Creating journal (8192 blocks): done 00:07:40.883 Writing superblocks and filesystem accounting information: 0/64 done 00:07:40.883 00:07:40.883 20:58:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:40.883 20:58:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1769272 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.883 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.143 00:07:41.143 real 0m1.475s 00:07:41.143 user 0m0.024s 00:07:41.143 sys 0m0.051s 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 ************************************ 00:07:41.143 END TEST filesystem_in_capsule_ext4 00:07:41.143 ************************************ 00:07:41.143 
20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 ************************************ 00:07:41.143 START TEST filesystem_in_capsule_btrfs 00:07:41.143 ************************************ 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:41.143 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:41.403 btrfs-progs v6.6.2 00:07:41.403 See https://btrfs.readthedocs.io for more information. 00:07:41.403 00:07:41.403 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:41.403 NOTE: several default settings have changed in version 5.15, please make sure 00:07:41.403 this does not affect your deployments: 00:07:41.403 - DUP for metadata (-m dup) 00:07:41.403 - enabled no-holes (-O no-holes) 00:07:41.403 - enabled free-space-tree (-R free-space-tree) 00:07:41.403 00:07:41.403 Label: (null) 00:07:41.403 UUID: dd6f4651-7e3e-4e79-9ed2-e3d5eb000223 00:07:41.403 Node size: 16384 00:07:41.403 Sector size: 4096 00:07:41.403 Filesystem size: 510.00MiB 00:07:41.403 Block group profiles: 00:07:41.403 Data: single 8.00MiB 00:07:41.403 Metadata: DUP 32.00MiB 00:07:41.403 System: DUP 8.00MiB 00:07:41.403 SSD detected: yes 00:07:41.403 Zoned device: no 00:07:41.403 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:41.403 Runtime features: free-space-tree 00:07:41.403 Checksum: crc32c 00:07:41.403 Number of devices: 1 00:07:41.403 Devices: 00:07:41.403 ID SIZE PATH 00:07:41.403 1 510.00MiB /dev/nvme0n1p1 00:07:41.403 00:07:41.403 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:41.403 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1769272 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.668 00:07:41.668 real 0m0.541s 00:07:41.668 user 0m0.021s 00:07:41.668 sys 0m0.065s 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.668 ************************************ 00:07:41.668 END TEST filesystem_in_capsule_btrfs 00:07:41.668 ************************************ 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.668 ************************************ 00:07:41.668 START TEST filesystem_in_capsule_xfs 00:07:41.668 ************************************ 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:41.668 20:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:41.668 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:41.668 = sectsz=512 attr=2, projid32bit=1 00:07:41.668 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:41.668 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:41.668 data = bsize=4096 blocks=130560, imaxpct=25 00:07:41.668 = sunit=0 swidth=0 blks 00:07:41.668 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:41.668 log =internal log bsize=4096 blocks=16384, version=2 00:07:41.668 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:41.668 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:43.050 Discarding blocks...Done. 
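For reference, the per-filesystem check that target/filesystem.sh is running here (already completed for ext4 and btrfs above, continuing for xfs below) boils down to a short create/mount/write/unmount cycle. The sketch below is reconstructed from the traces in this log and is illustrative only; the device node, mountpoint and pid are the ones from this run and would differ elsewhere.

dev=/dev/nvme0n1p1        # partition created on the NVMe-oF namespace earlier in this log
mnt=/mnt/device

mkfs.xfs -f "$dev"                         # make_filesystem: ext4 uses -F, btrfs and xfs use -f
mount "$dev" "$mnt"                        # mount the fresh filesystem
touch "$mnt/aaa" && sync                   # write a file and flush it out
rm "$mnt/aaa" && sync                      # delete it and flush again
umount "$mnt"                              # unmount before any teardown
kill -0 "$nvmfpid"                         # the nvmf target (pid 1769272 in this run) must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible to the initiator
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible
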
00:07:43.051 20:58:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:43.051 20:58:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1769272 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.960 00:07:44.960 real 0m3.011s 00:07:44.960 user 0m0.022s 00:07:44.960 sys 0m0.058s 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:44.960 ************************************ 00:07:44.960 END TEST filesystem_in_capsule_xfs 00:07:44.960 ************************************ 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:44.960 20:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:44.960 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:44.960 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:45.221 20:58:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1769272 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1769272 ']' 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1769272 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1769272 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1769272' 00:07:45.221 killing process with pid 1769272 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1769272 00:07:45.221 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1769272 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:45.483 00:07:45.483 real 0m12.395s 00:07:45.483 user 0m48.836s 00:07:45.483 sys 0m1.086s 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.483 ************************************ 00:07:45.483 END TEST nvmf_filesystem_in_capsule 00:07:45.483 ************************************ 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.483 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.483 rmmod nvme_tcp 00:07:45.483 rmmod nvme_fabrics 00:07:45.483 rmmod nvme_keyring 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.744 20:58:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.655 20:58:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:47.655 00:07:47.655 real 0m34.102s 00:07:47.655 user 1m34.368s 00:07:47.655 sys 0m8.313s 00:07:47.655 20:58:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.655 20:58:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.655 ************************************ 00:07:47.655 END TEST nvmf_filesystem 00:07:47.655 ************************************ 00:07:47.655 20:58:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:47.655 20:58:14 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:47.655 20:58:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.655 20:58:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.655 20:58:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.655 ************************************ 00:07:47.656 START TEST nvmf_target_discovery 00:07:47.656 ************************************ 00:07:47.656 20:58:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:47.917 * Looking for test storage... 
00:07:47.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.917 20:58:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.063 20:58:22 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:56.063 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:56.063 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:56.063 Found net devices under 0000:31:00.0: cvl_0_0 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:56.063 Found net devices under 0000:31:00.1: cvl_0_1 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.063 20:58:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.741 ms 00:07:56.063 00:07:56.063 --- 10.0.0.2 ping statistics --- 00:07:56.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.063 rtt min/avg/max/mdev = 0.741/0.741/0.741/0.000 ms 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:07:56.063 00:07:56.063 --- 10.0.0.1 ping statistics --- 00:07:56.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.063 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.063 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1776819 00:07:56.064 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1776819 00:07:56.064 20:58:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.064 20:58:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1776819 ']' 00:07:56.064 20:58:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.064 20:58:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.064 20:58:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:56.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.064 20:58:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.064 20:58:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.064 [2024-07-15 20:58:23.337323] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:56.064 [2024-07-15 20:58:23.337391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.324 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.324 [2024-07-15 20:58:23.416991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.324 [2024-07-15 20:58:23.491603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.324 [2024-07-15 20:58:23.491642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.324 [2024-07-15 20:58:23.491650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.324 [2024-07-15 20:58:23.491656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.324 [2024-07-15 20:58:23.491662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.324 [2024-07-15 20:58:23.491811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.324 [2024-07-15 20:58:23.491924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.324 [2024-07-15 20:58:23.492080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.324 [2024-07-15 20:58:23.492082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.897 [2024-07-15 20:58:24.156916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.897 Null1 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.897 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 [2024-07-15 20:58:24.214734] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 Null2 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:57.158 20:58:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 Null3 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 Null4 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.158 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:07:57.420 00:07:57.420 Discovery Log Number of Records 6, Generation counter 6 00:07:57.420 =====Discovery Log Entry 0====== 00:07:57.420 trtype: tcp 00:07:57.420 adrfam: ipv4 00:07:57.420 subtype: current discovery subsystem 00:07:57.420 treq: not required 00:07:57.420 portid: 0 00:07:57.420 trsvcid: 4420 00:07:57.420 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.420 traddr: 10.0.0.2 00:07:57.420 eflags: explicit discovery connections, duplicate discovery information 00:07:57.420 sectype: none 00:07:57.420 =====Discovery Log Entry 1====== 00:07:57.420 trtype: tcp 00:07:57.420 adrfam: ipv4 00:07:57.420 subtype: nvme subsystem 00:07:57.420 treq: not required 00:07:57.420 portid: 0 00:07:57.420 trsvcid: 4420 00:07:57.420 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:57.420 traddr: 10.0.0.2 00:07:57.420 eflags: none 00:07:57.420 sectype: none 00:07:57.420 =====Discovery Log Entry 2====== 00:07:57.420 trtype: tcp 00:07:57.420 adrfam: ipv4 00:07:57.420 subtype: nvme subsystem 00:07:57.420 treq: not required 00:07:57.420 portid: 0 00:07:57.420 trsvcid: 4420 00:07:57.420 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:57.420 traddr: 10.0.0.2 00:07:57.420 eflags: none 00:07:57.420 sectype: none 00:07:57.420 =====Discovery Log Entry 3====== 00:07:57.420 trtype: tcp 00:07:57.420 adrfam: ipv4 00:07:57.420 subtype: nvme subsystem 00:07:57.420 treq: not required 00:07:57.420 portid: 0 00:07:57.420 trsvcid: 4420 00:07:57.420 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:57.420 traddr: 10.0.0.2 00:07:57.420 eflags: none 00:07:57.420 sectype: none 00:07:57.420 =====Discovery Log Entry 4====== 00:07:57.420 trtype: tcp 00:07:57.420 adrfam: ipv4 00:07:57.420 subtype: nvme subsystem 00:07:57.420 treq: not required 
00:07:57.420 portid: 0 00:07:57.420 trsvcid: 4420 00:07:57.420 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:57.420 traddr: 10.0.0.2 00:07:57.420 eflags: none 00:07:57.420 sectype: none 00:07:57.420 =====Discovery Log Entry 5====== 00:07:57.420 trtype: tcp 00:07:57.420 adrfam: ipv4 00:07:57.420 subtype: discovery subsystem referral 00:07:57.420 treq: not required 00:07:57.420 portid: 0 00:07:57.420 trsvcid: 4430 00:07:57.420 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.420 traddr: 10.0.0.2 00:07:57.420 eflags: none 00:07:57.420 sectype: none 00:07:57.420 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:57.420 Perform nvmf subsystem discovery via RPC 00:07:57.420 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:57.420 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.420 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.420 [ 00:07:57.420 { 00:07:57.420 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:57.420 "subtype": "Discovery", 00:07:57.420 "listen_addresses": [ 00:07:57.420 { 00:07:57.420 "trtype": "TCP", 00:07:57.420 "adrfam": "IPv4", 00:07:57.420 "traddr": "10.0.0.2", 00:07:57.420 "trsvcid": "4420" 00:07:57.420 } 00:07:57.420 ], 00:07:57.420 "allow_any_host": true, 00:07:57.420 "hosts": [] 00:07:57.420 }, 00:07:57.420 { 00:07:57.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.421 "subtype": "NVMe", 00:07:57.421 "listen_addresses": [ 00:07:57.421 { 00:07:57.421 "trtype": "TCP", 00:07:57.421 "adrfam": "IPv4", 00:07:57.421 "traddr": "10.0.0.2", 00:07:57.421 "trsvcid": "4420" 00:07:57.421 } 00:07:57.421 ], 00:07:57.421 "allow_any_host": true, 00:07:57.421 "hosts": [], 00:07:57.421 "serial_number": "SPDK00000000000001", 00:07:57.421 "model_number": "SPDK bdev Controller", 00:07:57.421 "max_namespaces": 32, 00:07:57.421 "min_cntlid": 1, 00:07:57.421 "max_cntlid": 65519, 00:07:57.421 "namespaces": [ 00:07:57.421 { 00:07:57.421 "nsid": 1, 00:07:57.421 "bdev_name": "Null1", 00:07:57.421 "name": "Null1", 00:07:57.421 "nguid": "CD115EB63033475E8CA078CF55289501", 00:07:57.421 "uuid": "cd115eb6-3033-475e-8ca0-78cf55289501" 00:07:57.421 } 00:07:57.421 ] 00:07:57.421 }, 00:07:57.421 { 00:07:57.421 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:57.421 "subtype": "NVMe", 00:07:57.421 "listen_addresses": [ 00:07:57.421 { 00:07:57.421 "trtype": "TCP", 00:07:57.421 "adrfam": "IPv4", 00:07:57.421 "traddr": "10.0.0.2", 00:07:57.421 "trsvcid": "4420" 00:07:57.421 } 00:07:57.421 ], 00:07:57.421 "allow_any_host": true, 00:07:57.421 "hosts": [], 00:07:57.421 "serial_number": "SPDK00000000000002", 00:07:57.421 "model_number": "SPDK bdev Controller", 00:07:57.421 "max_namespaces": 32, 00:07:57.421 "min_cntlid": 1, 00:07:57.421 "max_cntlid": 65519, 00:07:57.421 "namespaces": [ 00:07:57.421 { 00:07:57.421 "nsid": 1, 00:07:57.421 "bdev_name": "Null2", 00:07:57.421 "name": "Null2", 00:07:57.421 "nguid": "726BA5BA3344423D8F6A8E5373060A14", 00:07:57.421 "uuid": "726ba5ba-3344-423d-8f6a-8e5373060a14" 00:07:57.421 } 00:07:57.421 ] 00:07:57.421 }, 00:07:57.421 { 00:07:57.421 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:57.421 "subtype": "NVMe", 00:07:57.421 "listen_addresses": [ 00:07:57.421 { 00:07:57.421 "trtype": "TCP", 00:07:57.421 "adrfam": "IPv4", 00:07:57.421 "traddr": "10.0.0.2", 00:07:57.421 "trsvcid": "4420" 00:07:57.421 } 00:07:57.421 ], 00:07:57.421 "allow_any_host": true, 
00:07:57.421 "hosts": [], 00:07:57.421 "serial_number": "SPDK00000000000003", 00:07:57.421 "model_number": "SPDK bdev Controller", 00:07:57.421 "max_namespaces": 32, 00:07:57.421 "min_cntlid": 1, 00:07:57.421 "max_cntlid": 65519, 00:07:57.421 "namespaces": [ 00:07:57.421 { 00:07:57.421 "nsid": 1, 00:07:57.421 "bdev_name": "Null3", 00:07:57.421 "name": "Null3", 00:07:57.421 "nguid": "D49212174F6B40BDB42106C8F21DC33D", 00:07:57.421 "uuid": "d4921217-4f6b-40bd-b421-06c8f21dc33d" 00:07:57.421 } 00:07:57.421 ] 00:07:57.421 }, 00:07:57.421 { 00:07:57.421 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:57.421 "subtype": "NVMe", 00:07:57.421 "listen_addresses": [ 00:07:57.421 { 00:07:57.421 "trtype": "TCP", 00:07:57.421 "adrfam": "IPv4", 00:07:57.421 "traddr": "10.0.0.2", 00:07:57.421 "trsvcid": "4420" 00:07:57.421 } 00:07:57.421 ], 00:07:57.421 "allow_any_host": true, 00:07:57.421 "hosts": [], 00:07:57.421 "serial_number": "SPDK00000000000004", 00:07:57.421 "model_number": "SPDK bdev Controller", 00:07:57.421 "max_namespaces": 32, 00:07:57.421 "min_cntlid": 1, 00:07:57.421 "max_cntlid": 65519, 00:07:57.421 "namespaces": [ 00:07:57.421 { 00:07:57.421 "nsid": 1, 00:07:57.421 "bdev_name": "Null4", 00:07:57.421 "name": "Null4", 00:07:57.421 "nguid": "5790A3B761D34C67851BE90C16A5689B", 00:07:57.421 "uuid": "5790a3b7-61d3-4c67-851b-e90c16a5689b" 00:07:57.421 } 00:07:57.421 ] 00:07:57.421 } 00:07:57.421 ] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery 
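Tear-down mirrors the setup: each subsystem is deleted before its backing bdev, the referral is dropped, and bdev_get_bdevs is expected to come back empty. Condensed, under the same assumptions as the setup sketch above:

  for i in $(seq 1 4); do
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # listeners and namespaces go with it
      scripts/rpc.py bdev_null_delete "Null$i"
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'                         # should print nothing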
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.421 rmmod nvme_tcp 00:07:57.421 rmmod nvme_fabrics 00:07:57.421 rmmod nvme_keyring 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1776819 ']' 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1776819 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1776819 ']' 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1776819 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.421 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776819 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776819' 00:07:57.683 killing process with pid 1776819 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1776819 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1776819 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.683 20:58:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.231 20:58:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.231 00:08:00.231 real 0m12.033s 00:08:00.231 user 0m8.113s 00:08:00.231 sys 0m6.376s 00:08:00.231 20:58:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.231 20:58:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.231 ************************************ 00:08:00.231 END TEST nvmf_target_discovery 00:08:00.231 ************************************ 00:08:00.231 20:58:26 nvmf_tcp -- common/autotest_common.sh@1142 
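The finalizer running here is the same for every target test: unload the host-side NVMe/TCP kernel modules, stop the nvmf_tgt process the test started, and flush the test interface. Reduced to the commands visible in the trace (the pid and interface name are specific to this run):

  modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid was recorded when nvmf_tgt started (1776819 in this run)
  ip -4 addr flush cvl_0_1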
-- # return 0 00:08:00.231 20:58:26 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:00.231 20:58:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.231 20:58:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.231 20:58:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.231 ************************************ 00:08:00.231 START TEST nvmf_referrals 00:08:00.231 ************************************ 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:00.231 * Looking for test storage... 00:08:00.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:00.231 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.232 20:58:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:08.390 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.391 20:58:35 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:08.391 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:08.391 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.391 20:58:35 
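What the harness does in this block is pick the test NICs: it matches PCI functions against known Intel E810/X722 and Mellanox device IDs, then resolves each selected function to its kernel net devices through sysfs, which is what produces the 'Found net devices under ...' lines just below. The same lookup by hand for the first function found here (paths are standard sysfs; the 0x159b ID and the ice driver come from the trace):

  pci=0000:31:00.0
  cat "/sys/bus/pci/devices/$pci/device"                      # 0x159b, an E810-family NIC
  basename "$(readlink "/sys/bus/pci/devices/$pci/driver")"   # ice
  ls "/sys/bus/pci/devices/$pci/net/"                         # cvl_0_0 on this host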
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:08.391 Found net devices under 0000:31:00.0: cvl_0_0 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:08.391 Found net devices under 0000:31:00.1: cvl_0_1 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.391 20:58:35 
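With both ports of the card available, nvmf_tcp_init splits them into an initiator side and a target side: the target port is moved into a private network namespace so the host and the 'remote' target talk over a real TCP path on one machine. The topology built above (names and addresses are whatever this host was assigned):

  ip netns add cvl_0_0_ns_spdk                                  # namespace that owns the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

The links are then brought up, port 4420 is opened in iptables, and reachability is ping-checked in both directions in the lines that follow.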
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:08:08.391 00:08:08.391 --- 10.0.0.2 ping statistics --- 00:08:08.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.391 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:08:08.391 00:08:08.391 --- 10.0.0.1 ping statistics --- 00:08:08.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.391 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1781842 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1781842 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1781842 ']' 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
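nvmfappstart then launches the target inside that namespace and waits for its RPC socket. Stripped of harness plumbing, and run as root the way the CI job does, the start-up traced above is roughly the following; the polling loop is only a stand-in for the harness's waitforlisten helper, and the binary path and masks are copied from the trace:

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &      # -m 0xF: four reactor cores
  nvmfpid=$!
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # RPC socket is answering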
00:08:08.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.391 20:58:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.391 [2024-07-15 20:58:35.448469] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:08.391 [2024-07-15 20:58:35.448534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.391 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.391 [2024-07-15 20:58:35.528058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.391 [2024-07-15 20:58:35.602349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.391 [2024-07-15 20:58:35.602389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.391 [2024-07-15 20:58:35.602397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.391 [2024-07-15 20:58:35.602403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.391 [2024-07-15 20:58:35.602409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.391 [2024-07-15 20:58:35.602568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.391 [2024-07-15 20:58:35.602692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.391 [2024-07-15 20:58:35.602847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.391 [2024-07-15 20:58:35.602848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.994 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.994 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:08.994 20:58:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.994 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.994 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.280 [2024-07-15 20:58:36.279852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.280 [2024-07-15 20:58:36.293557] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.280 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.542 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.803 20:58:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.803 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:09.803 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:09.803 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:09.803 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:09.803 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:09.803 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.803 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:10.065 20:58:37 
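The referral checks above and below all follow one pattern: mutate the referral list over RPC, then read it back twice, once from the target (nvmf_discovery_get_referrals) and once from the host (nvme discover against the 8009 discovery listener), and require the two views to match. A compressed sketch of that round trip, with addresses, ports and jq filters taken from the trace; HOSTNQN and HOSTID are placeholders for the host identity the harness generates with nvme gen-hostnqn:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # done once, right after start-up
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  # target-side view
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # host-side view of the same discovery service
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The optional -n argument to nvmf_discovery_add_referral (used just above with discovery and with nqn.2016-06.io.spdk:cnode1) sets the subsystem NQN the referral points at, which decides whether it appears in the discovery log as another discovery subsystem or as an nvme subsystem entry.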
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.065 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.326 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.326 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:10.327 20:58:37 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.327 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:10.588 
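get_discovery_entries, used throughout this part of the trace, is essentially a jq filter over nvme discover -o json keyed on the record subtype. A sketch of the helper, reusing the exact filters from the trace and the same HOSTNQN/HOSTID placeholders as before:

  get_discovery_entries() {
      local subtype=$1
      nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq ".records[] | select(.subtype == \"$subtype\")"
  }
  get_discovery_entries "nvme subsystem" | jq -r .subnqn                # nqn.2016-06.io.spdk:cnode1 above
  get_discovery_entries "discovery subsystem referral" | jq -r .subnqn  # nqn.2014-08.org.nvmexpress.discovery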
20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.588 rmmod nvme_tcp 00:08:10.588 rmmod nvme_fabrics 00:08:10.588 rmmod nvme_keyring 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1781842 ']' 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1781842 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1781842 ']' 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1781842 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.588 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1781842 00:08:10.850 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:10.850 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:10.850 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1781842' 00:08:10.850 killing process with pid 1781842 00:08:10.850 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1781842 00:08:10.850 20:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1781842 00:08:10.850 20:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.850 20:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.850 20:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.850 20:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.850 20:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.850 20:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.850 20:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.850 20:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.392 20:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.392 00:08:13.392 real 0m13.055s 00:08:13.392 user 0m12.870s 00:08:13.392 sys 0m6.637s 00:08:13.392 20:58:40 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.392 20:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.392 ************************************ 00:08:13.392 END TEST nvmf_referrals 00:08:13.392 ************************************ 00:08:13.392 20:58:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:13.392 20:58:40 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:13.392 20:58:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:13.392 20:58:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.392 20:58:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.392 ************************************ 00:08:13.392 START TEST nvmf_connect_disconnect 00:08:13.392 ************************************ 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:13.392 * Looking for test storage... 00:08:13.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.392 20:58:40 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.392 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.393 20:58:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:21.550 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:21.550 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.550 20:58:48 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:21.550 Found net devices under 0000:31:00.0: cvl_0_0 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:21.550 Found net devices under 0000:31:00.1: cvl_0_1 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.550 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:08:21.551 00:08:21.551 --- 10.0.0.2 ping statistics --- 00:08:21.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.551 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:08:21.551 00:08:21.551 --- 10.0.0.1 ping statistics --- 00:08:21.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.551 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1787009 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1787009 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1787009 ']' 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.551 20:58:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.551 [2024-07-15 20:58:48.475683] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
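The interface plumbing performed by nvmf_tcp_init above reduces to moving one port of the E810 pair into a private network namespace, addressing both ends, and opening the NVMe/TCP port. A minimal sketch of the same steps, with the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses taken from this run (other hosts will report different names):
# target port lives in its own namespace; initiator port stays in the default one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring both ends (and loopback inside the namespace) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic on 4420 and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1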
00:08:21.551 [2024-07-15 20:58:48.475751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.551 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.551 [2024-07-15 20:58:48.554761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.551 [2024-07-15 20:58:48.630248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.551 [2024-07-15 20:58:48.630286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.551 [2024-07-15 20:58:48.630294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.551 [2024-07-15 20:58:48.630300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.551 [2024-07-15 20:58:48.630305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.551 [2024-07-15 20:58:48.630543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.551 [2024-07-15 20:58:48.630718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.551 [2024-07-15 20:58:48.630875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.551 [2024-07-15 20:58:48.630876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.123 [2024-07-15 20:58:49.285804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.123 20:58:49 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.123 [2024-07-15 20:58:49.345226] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:22.123 20:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:26.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.429 rmmod nvme_tcp 00:08:40.429 rmmod nvme_fabrics 00:08:40.429 rmmod nvme_keyring 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1787009 ']' 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1787009 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- 
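Stripped of xtrace noise, connect_disconnect.sh stands the target up with five RPC calls and then loops nvme connect/disconnect against it (num_iterations=5 in this run). A minimal sketch; the nvme connect invocation is an assumption based on the NVME_CONNECT/NVME_HOST variables set in common.sh above, since the loop body itself is not echoed in this excerpt:
# target side: transport, backing bdev, subsystem, namespace, listener
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc_cmd bdev_malloc_create 64 512          # returns the bdev name Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: repeated connect/disconnect (assumed invocation)
for i in $(seq 1 5); do
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)" as seen above
done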
common/autotest_common.sh@948 -- # '[' -z 1787009 ']' 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1787009 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1787009 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1787009' 00:08:40.429 killing process with pid 1787009 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1787009 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1787009 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.429 20:59:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.976 20:59:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.976 00:08:42.976 real 0m29.493s 00:08:42.976 user 1m17.766s 00:08:42.976 sys 0m7.132s 00:08:42.976 20:59:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.976 20:59:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.976 ************************************ 00:08:42.976 END TEST nvmf_connect_disconnect 00:08:42.976 ************************************ 00:08:42.976 20:59:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:42.976 20:59:09 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:42.976 20:59:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.976 20:59:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.976 20:59:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.976 ************************************ 00:08:42.976 START TEST nvmf_multitarget 00:08:42.976 ************************************ 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:42.976 * Looking for test storage... 
00:08:42.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.976 20:59:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:51.120 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:51.120 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:51.120 Found net devices under 0000:31:00.0: cvl_0_0 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.120 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:51.121 Found net devices under 0000:31:00.1: cvl_0_1 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.121 20:59:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:08:51.121 00:08:51.121 --- 10.0.0.2 ping statistics --- 00:08:51.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.121 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:08:51.121 00:08:51.121 --- 10.0.0.1 ping statistics --- 00:08:51.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.121 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1795529 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1795529 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1795529 ']' 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.121 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.121 [2024-07-15 20:59:18.153497] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:08:51.121 [2024-07-15 20:59:18.153559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.121 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.121 [2024-07-15 20:59:18.232675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.121 [2024-07-15 20:59:18.307111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.121 [2024-07-15 20:59:18.307151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.121 [2024-07-15 20:59:18.307159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.121 [2024-07-15 20:59:18.307166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.121 [2024-07-15 20:59:18.307171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.121 [2024-07-15 20:59:18.307310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.121 [2024-07-15 20:59:18.307375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.121 [2024-07-15 20:59:18.307538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.121 [2024-07-15 20:59:18.307539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.692 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.692 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:51.692 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.692 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.692 20:59:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.692 20:59:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.692 20:59:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:51.953 20:59:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:51.953 20:59:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:51.953 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:51.953 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:51.953 "nvmf_tgt_1" 00:08:51.953 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:52.215 "nvmf_tgt_2" 00:08:52.215 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.215 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:52.215 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:52.215 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:52.215 true 00:08:52.215 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:52.476 true 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.476 rmmod nvme_tcp 00:08:52.476 rmmod nvme_fabrics 00:08:52.476 rmmod nvme_keyring 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1795529 ']' 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1795529 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1795529 ']' 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1795529 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.476 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1795529 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1795529' 00:08:52.737 killing process with pid 1795529 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1795529 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1795529 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.737 20:59:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.284 20:59:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.284 00:08:55.284 real 0m12.268s 00:08:55.284 user 0m9.594s 00:08:55.284 sys 0m6.457s 00:08:55.284 20:59:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.284 20:59:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:55.284 ************************************ 00:08:55.284 END TEST nvmf_multitarget 00:08:55.284 ************************************ 00:08:55.284 20:59:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:55.284 20:59:22 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:55.284 20:59:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:55.284 20:59:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.284 20:59:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.284 ************************************ 00:08:55.284 START TEST nvmf_rpc 00:08:55.284 ************************************ 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:55.284 * Looking for test storage... 
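The nvmf_multitarget run that just finished exercises the multi-target RPCs; condensed to the helper-script calls shown in its trace (script path from this workspace, target names and sizes as used by multitarget.sh), it amounts to:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]

The jq length checks mirror the '[' N '!=' N ']' assertions in the trace above.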
00:08:55.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.284 20:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
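The nvmftestinit trace that follows discovers the two e810 ports (cvl_0_0 and cvl_0_1), moves one into the cvl_0_0_ns_spdk namespace, addresses both sides and verifies reachability. Stripped of the PCI discovery, the wiring boils down to roughly the following (device, namespace and address values exactly as in this log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side port stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> host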
00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.431 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:03.432 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:03.432 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:03.432 Found net devices under 0000:31:00.0: cvl_0_0 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:03.432 Found net devices under 0000:31:00.1: cvl_0_1 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:03.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:09:03.432 00:09:03.432 --- 10.0.0.2 ping statistics --- 00:09:03.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.432 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:09:03.432 00:09:03.432 --- 10.0.0.1 ping statistics --- 00:09:03.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.432 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1800659 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1800659 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1800659 ']' 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.432 20:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.432 [2024-07-15 20:59:30.467569] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:03.432 [2024-07-15 20:59:30.467642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.432 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.432 [2024-07-15 20:59:30.549628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.432 [2024-07-15 20:59:30.625876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.432 [2024-07-15 20:59:30.625916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:03.432 [2024-07-15 20:59:30.625923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.432 [2024-07-15 20:59:30.625930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.432 [2024-07-15 20:59:30.625936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.432 [2024-07-15 20:59:30.626095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.432 [2024-07-15 20:59:30.626217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.432 [2024-07-15 20:59:30.626408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.432 [2024-07-15 20:59:30.626523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.004 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.004 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:04.004 20:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.004 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.004 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.004 20:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:04.264 "tick_rate": 2400000000, 00:09:04.264 "poll_groups": [ 00:09:04.264 { 00:09:04.264 "name": "nvmf_tgt_poll_group_000", 00:09:04.264 "admin_qpairs": 0, 00:09:04.264 "io_qpairs": 0, 00:09:04.264 "current_admin_qpairs": 0, 00:09:04.264 "current_io_qpairs": 0, 00:09:04.264 "pending_bdev_io": 0, 00:09:04.264 "completed_nvme_io": 0, 00:09:04.264 "transports": [] 00:09:04.264 }, 00:09:04.264 { 00:09:04.264 "name": "nvmf_tgt_poll_group_001", 00:09:04.264 "admin_qpairs": 0, 00:09:04.264 "io_qpairs": 0, 00:09:04.264 "current_admin_qpairs": 0, 00:09:04.264 "current_io_qpairs": 0, 00:09:04.264 "pending_bdev_io": 0, 00:09:04.264 "completed_nvme_io": 0, 00:09:04.264 "transports": [] 00:09:04.264 }, 00:09:04.264 { 00:09:04.264 "name": "nvmf_tgt_poll_group_002", 00:09:04.264 "admin_qpairs": 0, 00:09:04.264 "io_qpairs": 0, 00:09:04.264 "current_admin_qpairs": 0, 00:09:04.264 "current_io_qpairs": 0, 00:09:04.264 "pending_bdev_io": 0, 00:09:04.264 "completed_nvme_io": 0, 00:09:04.264 "transports": [] 00:09:04.264 }, 00:09:04.264 { 00:09:04.264 "name": "nvmf_tgt_poll_group_003", 00:09:04.264 "admin_qpairs": 0, 00:09:04.264 "io_qpairs": 0, 00:09:04.264 "current_admin_qpairs": 0, 00:09:04.264 "current_io_qpairs": 0, 00:09:04.264 "pending_bdev_io": 0, 00:09:04.264 "completed_nvme_io": 0, 00:09:04.264 "transports": [] 00:09:04.264 } 00:09:04.264 ] 00:09:04.264 }' 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.264 [2024-07-15 20:59:31.417152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.264 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:04.264 "tick_rate": 2400000000, 00:09:04.264 "poll_groups": [ 00:09:04.264 { 00:09:04.264 "name": "nvmf_tgt_poll_group_000", 00:09:04.264 "admin_qpairs": 0, 00:09:04.264 "io_qpairs": 0, 00:09:04.264 "current_admin_qpairs": 0, 00:09:04.264 "current_io_qpairs": 0, 00:09:04.264 "pending_bdev_io": 0, 00:09:04.264 "completed_nvme_io": 0, 00:09:04.264 "transports": [ 00:09:04.264 { 00:09:04.264 "trtype": "TCP" 00:09:04.264 } 00:09:04.264 ] 00:09:04.264 }, 00:09:04.264 { 00:09:04.264 "name": "nvmf_tgt_poll_group_001", 00:09:04.264 "admin_qpairs": 0, 00:09:04.264 "io_qpairs": 0, 00:09:04.264 "current_admin_qpairs": 0, 00:09:04.264 "current_io_qpairs": 0, 00:09:04.264 "pending_bdev_io": 0, 00:09:04.264 "completed_nvme_io": 0, 00:09:04.264 "transports": [ 00:09:04.264 { 00:09:04.264 "trtype": "TCP" 00:09:04.264 } 00:09:04.264 ] 00:09:04.264 }, 00:09:04.264 { 00:09:04.264 "name": "nvmf_tgt_poll_group_002", 00:09:04.264 "admin_qpairs": 0, 00:09:04.264 "io_qpairs": 0, 00:09:04.264 "current_admin_qpairs": 0, 00:09:04.264 "current_io_qpairs": 0, 00:09:04.264 "pending_bdev_io": 0, 00:09:04.264 "completed_nvme_io": 0, 00:09:04.264 "transports": [ 00:09:04.264 { 00:09:04.264 "trtype": "TCP" 00:09:04.264 } 00:09:04.264 ] 00:09:04.265 }, 00:09:04.265 { 00:09:04.265 "name": "nvmf_tgt_poll_group_003", 00:09:04.265 "admin_qpairs": 0, 00:09:04.265 "io_qpairs": 0, 00:09:04.265 "current_admin_qpairs": 0, 00:09:04.265 "current_io_qpairs": 0, 00:09:04.265 "pending_bdev_io": 0, 00:09:04.265 "completed_nvme_io": 0, 00:09:04.265 "transports": [ 00:09:04.265 { 00:09:04.265 "trtype": "TCP" 00:09:04.265 } 00:09:04.265 ] 00:09:04.265 } 00:09:04.265 ] 00:09:04.265 }' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
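With the TCP transport created and the four per-core poll groups verified above, the rest of this rpc.sh trace first shows that nvme connect is rejected ('does not allow host ...') until the host NQN is added or allow_any_host is enabled, and then repeats a create/connect/disconnect/delete cycle loops=5 times. Condensed to the RPC and nvme-cli calls used in the trace (rpc.py path from this workspace; NVME_HOSTNQN and NVME_HOSTID as defined in the sourced nvmf common.sh):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc1
for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial: a /dev/nvme* device with serial SPDKISFASTANDAWESOME appears
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done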
00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.265 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.525 Malloc1 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.525 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.526 [2024-07-15 20:59:31.606508] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:04.526 [2024-07-15 20:59:31.633209] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:04.526 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:04.526 could not add new controller: failed to write to nvme-fabrics device 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.526 20:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.019 20:59:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.019 20:59:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:06.019 20:59:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.019 20:59:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:06.019 20:59:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:07.930 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:07.930 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:07.930 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.930 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:07.930 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.930 20:59:35 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:07.931 20:59:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.931 20:59:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.931 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:07.931 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:07.931 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:08.192 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.192 [2024-07-15 20:59:35.281197] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:08.192 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:08.192 could not add new controller: failed to write to nvme-fabrics device 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.193 20:59:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.578 20:59:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.578 20:59:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:09.578 20:59:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.578 20:59:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:09.578 20:59:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:12.122 20:59:38 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.122 [2024-07-15 20:59:38.981599] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.122 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.123 20:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:12.123 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.123 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.123 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.123 20:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:12.123 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.123 20:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.123 20:59:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.123 20:59:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.507 20:59:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.507 20:59:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:13.507 20:59:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.507 20:59:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:13.507 20:59:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.419 [2024-07-15 20:59:42.692315] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.419 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.680 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.680 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:15.680 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.680 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.680 20:59:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.680 20:59:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.061 20:59:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.061 20:59:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:17.061 20:59:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.061 20:59:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:17.061 20:59:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:18.969 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:18.969 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:18.969 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.969 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:18.969 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.969 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:18.969 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.229 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.229 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:19.229 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:19.229 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.229 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:19.229 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 [2024-07-15 20:59:46.368813] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.230 20:59:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.140 20:59:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.140 20:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.140 20:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.140 20:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:21.140 20:59:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:23.049 20:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.049 20:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.049 20:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.049 20:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:23.049 20:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.049 20:59:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:23.049 20:59:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.049 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.050 [2024-07-15 20:59:50.187665] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.050 20:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.432 20:59:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.432 20:59:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.432 20:59:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.432 20:59:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:24.433 20:59:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.975 
20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.975 [2024-07-15 20:59:53.851224] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.975 20:59:53 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.975 20:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.360 20:59:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:28.360 20:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:28.360 20:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.360 20:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:28.360 20:59:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.275 20:59:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 [2024-07-15 20:59:57.577908] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 [2024-07-15 20:59:57.638012] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 [2024-07-15 20:59:57.702212] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.537 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
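The connect/disconnect iterations earlier in this trace all lean on the same host-side wait: after nvme connect, the test polls "lsblk -l -o NAME,SERIAL" until a block device carrying the serial SPDKISFASTANDAWESOME shows up, and waitforserial_disconnect does the inverse after nvme disconnect. The function below is a simplified rewrite of that polling pattern; the retry count and 2-second delay mirror the trace, but it is an illustrative sketch, not the exact common/autotest_common.sh code.

# sketch: poll lsblk until $expected block devices with the given serial are visible
waitforserial_sketch() {
    local serial=$1 expected=${2:-1} i=0 found
    sleep 2                                   # give the kernel time to create the namespace device
    while (( i++ <= 15 )); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
        (( found == expected )) && return 0   # device(s) present, the connection is usable
        sleep 2
    done
    return 1                                  # never showed up, let the caller fail the test
}
# usage: waitforserial_sketch SPDKISFASTANDAWESOME 1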
00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 [2024-07-15 20:59:57.758387] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
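The iterations around this point exercise only the target-side RPC lifecycle: each pass creates the subsystem, adds the TCP listener and the Malloc1 namespace, opens it to any host, then removes the namespace and deletes the subsystem again. Condensed into direct scripts/rpc.py calls, one pass looks like the sketch below; the rpc_cmd wrapper in the trace forwards to the same script, and the address, port and bdev name are the ones used in this run.

# sketch: one pass of the create/teardown loop, issued straight through scripts/rpc.py
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME           # subsystem with a fixed serial
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # NVMe/TCP listener on port 4420
$rpc nvmf_subsystem_add_ns $nqn Malloc1                           # attach the Malloc1 bdev as a namespace
$rpc nvmf_subsystem_allow_any_host $nqn                           # no host whitelist for the test
$rpc nvmf_subsystem_remove_ns $nqn 1                              # drop namespace id 1 again
$rpc nvmf_delete_subsystem $nqn                                   # and tear the subsystem down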
00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 [2024-07-15 20:59:57.814561] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.538 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.799 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:30.799 "tick_rate": 2400000000, 00:09:30.799 "poll_groups": [ 00:09:30.799 { 00:09:30.799 "name": "nvmf_tgt_poll_group_000", 00:09:30.799 "admin_qpairs": 0, 00:09:30.799 "io_qpairs": 224, 00:09:30.799 "current_admin_qpairs": 0, 00:09:30.799 "current_io_qpairs": 0, 00:09:30.799 "pending_bdev_io": 0, 00:09:30.799 "completed_nvme_io": 524, 00:09:30.799 "transports": [ 00:09:30.799 { 00:09:30.799 "trtype": "TCP" 00:09:30.799 } 00:09:30.799 ] 00:09:30.799 }, 00:09:30.799 { 00:09:30.799 "name": "nvmf_tgt_poll_group_001", 00:09:30.799 "admin_qpairs": 1, 00:09:30.799 "io_qpairs": 223, 00:09:30.799 "current_admin_qpairs": 0, 00:09:30.799 "current_io_qpairs": 0, 00:09:30.799 "pending_bdev_io": 0, 00:09:30.799 "completed_nvme_io": 223, 00:09:30.799 "transports": [ 00:09:30.799 { 00:09:30.799 "trtype": "TCP" 00:09:30.799 } 00:09:30.799 ] 00:09:30.799 }, 00:09:30.799 { 
00:09:30.799 "name": "nvmf_tgt_poll_group_002", 00:09:30.799 "admin_qpairs": 6, 00:09:30.799 "io_qpairs": 218, 00:09:30.799 "current_admin_qpairs": 0, 00:09:30.799 "current_io_qpairs": 0, 00:09:30.799 "pending_bdev_io": 0, 00:09:30.799 "completed_nvme_io": 218, 00:09:30.799 "transports": [ 00:09:30.799 { 00:09:30.799 "trtype": "TCP" 00:09:30.799 } 00:09:30.799 ] 00:09:30.799 }, 00:09:30.799 { 00:09:30.799 "name": "nvmf_tgt_poll_group_003", 00:09:30.800 "admin_qpairs": 0, 00:09:30.800 "io_qpairs": 224, 00:09:30.800 "current_admin_qpairs": 0, 00:09:30.800 "current_io_qpairs": 0, 00:09:30.800 "pending_bdev_io": 0, 00:09:30.800 "completed_nvme_io": 274, 00:09:30.800 "transports": [ 00:09:30.800 { 00:09:30.800 "trtype": "TCP" 00:09:30.800 } 00:09:30.800 ] 00:09:30.800 } 00:09:30.800 ] 00:09:30.800 }' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.800 20:59:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.800 rmmod nvme_tcp 00:09:30.800 rmmod nvme_fabrics 00:09:30.800 rmmod nvme_keyring 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1800659 ']' 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1800659 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1800659 ']' 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1800659 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.800 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1800659 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1800659' 00:09:31.085 killing process with pid 1800659 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1800659 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1800659 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.085 20:59:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.633 21:00:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.633 00:09:33.633 real 0m38.236s 00:09:33.633 user 1m52.807s 00:09:33.633 sys 0m7.605s 00:09:33.633 21:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.633 21:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.633 ************************************ 00:09:33.633 END TEST nvmf_rpc 00:09:33.633 ************************************ 00:09:33.633 21:00:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.633 21:00:00 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.633 21:00:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.633 21:00:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.633 21:00:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.633 ************************************ 00:09:33.633 START TEST nvmf_invalid 00:09:33.633 ************************************ 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.633 * Looking for test storage... 
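Before the nvmf_rpc teardown that ends above, the suite sanity-checked its totals with nvmf_get_stats: the jsum helper pipes the stats JSON through jq to pull one numeric field per poll group and lets awk add them up (7 admin qpairs and 889 I/O qpairs across the four poll groups in this run). The lines below are a minimal sketch of that aggregation, reusing the jq filter and awk program shown in the trace.

# sketch: total one per-poll-group counter from nvmf_get_stats
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
stats=$($rpc nvmf_get_stats)
admin_qpairs=$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}')
io_qpairs=$(echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}')
echo "admin qpairs: $admin_qpairs, io qpairs: $io_qpairs"   # 7 and 889 in the run logged above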
00:09:33.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.633 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.634 21:00:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:41.771 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:41.771 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:41.771 Found net devices under 0000:31:00.0: cvl_0_0 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.771 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:41.771 Found net devices under 0000:31:00.1: cvl_0_1 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:41.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.785 ms 00:09:41.772 00:09:41.772 --- 10.0.0.2 ping statistics --- 00:09:41.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.772 rtt min/avg/max/mdev = 0.785/0.785/0.785/0.000 ms 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:09:41.772 00:09:41.772 --- 10.0.0.1 ping statistics --- 00:09:41.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.772 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1811373 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1811373 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1811373 ']' 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.772 21:00:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.772 [2024-07-15 21:00:08.932678] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:09:41.772 [2024-07-15 21:00:08.932721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.772 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.772 [2024-07-15 21:00:08.997459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.032 [2024-07-15 21:00:09.064016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.032 [2024-07-15 21:00:09.064051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.032 [2024-07-15 21:00:09.064059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.032 [2024-07-15 21:00:09.064066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.032 [2024-07-15 21:00:09.064071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.032 [2024-07-15 21:00:09.064215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.032 [2024-07-15 21:00:09.064242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.032 [2024-07-15 21:00:09.064419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.032 [2024-07-15 21:00:09.064420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.602 21:00:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.602 21:00:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:42.602 21:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.602 21:00:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:42.602 21:00:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:42.602 21:00:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.602 21:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:42.602 21:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16454 00:09:42.863 [2024-07-15 21:00:09.915292] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:42.863 21:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:42.863 { 00:09:42.863 "nqn": "nqn.2016-06.io.spdk:cnode16454", 00:09:42.863 "tgt_name": "foobar", 00:09:42.863 "method": "nvmf_create_subsystem", 00:09:42.863 "req_id": 1 00:09:42.863 } 00:09:42.863 Got JSON-RPC error response 00:09:42.863 response: 00:09:42.863 { 00:09:42.863 "code": -32603, 00:09:42.863 "message": "Unable to find target foobar" 00:09:42.863 }' 00:09:42.863 21:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:42.863 { 00:09:42.863 "nqn": "nqn.2016-06.io.spdk:cnode16454", 00:09:42.863 "tgt_name": "foobar", 00:09:42.863 "method": "nvmf_create_subsystem", 00:09:42.863 "req_id": 1 00:09:42.863 } 00:09:42.863 Got JSON-RPC error response 00:09:42.863 response: 00:09:42.863 { 00:09:42.863 "code": -32603, 00:09:42.863 "message": "Unable to find target foobar" 
00:09:42.863 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:42.864 21:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:42.864 21:00:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4139 00:09:42.864 [2024-07-15 21:00:10.095893] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4139: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:42.864 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:42.864 { 00:09:42.864 "nqn": "nqn.2016-06.io.spdk:cnode4139", 00:09:42.864 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:42.864 "method": "nvmf_create_subsystem", 00:09:42.864 "req_id": 1 00:09:42.864 } 00:09:42.864 Got JSON-RPC error response 00:09:42.864 response: 00:09:42.864 { 00:09:42.864 "code": -32602, 00:09:42.864 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:42.864 }' 00:09:42.864 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:42.864 { 00:09:42.864 "nqn": "nqn.2016-06.io.spdk:cnode4139", 00:09:42.864 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:42.864 "method": "nvmf_create_subsystem", 00:09:42.864 "req_id": 1 00:09:42.864 } 00:09:42.864 Got JSON-RPC error response 00:09:42.864 response: 00:09:42.864 { 00:09:42.864 "code": -32602, 00:09:42.864 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:42.864 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:42.864 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:42.864 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1202 00:09:43.125 [2024-07-15 21:00:10.276446] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1202: invalid model number 'SPDK_Controller' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:43.125 { 00:09:43.125 "nqn": "nqn.2016-06.io.spdk:cnode1202", 00:09:43.125 "model_number": "SPDK_Controller\u001f", 00:09:43.125 "method": "nvmf_create_subsystem", 00:09:43.125 "req_id": 1 00:09:43.125 } 00:09:43.125 Got JSON-RPC error response 00:09:43.125 response: 00:09:43.125 { 00:09:43.125 "code": -32602, 00:09:43.125 "message": "Invalid MN SPDK_Controller\u001f" 00:09:43.125 }' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:43.125 { 00:09:43.125 "nqn": "nqn.2016-06.io.spdk:cnode1202", 00:09:43.125 "model_number": "SPDK_Controller\u001f", 00:09:43.125 "method": "nvmf_create_subsystem", 00:09:43.125 "req_id": 1 00:09:43.125 } 00:09:43.125 Got JSON-RPC error response 00:09:43.125 response: 00:09:43.125 { 00:09:43.125 "code": -32602, 00:09:43.125 "message": "Invalid MN SPDK_Controller\u001f" 00:09:43.125 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' 
'85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
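The nvmf_create_subsystem calls traced a little above are invalid.sh's negative-path checks: each RPC is given a deliberately bad argument (an unknown target name, then a serial number and a model number containing a control character) and the captured JSON-RPC error text is matched against the expected message. A minimal sketch of that pattern, assuming the rpc.py path shown in this log and that the error text is captured with 2>&1 (the RPC variable and the "|| true" capture are conveniences of this sketch, not the test script itself):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Unknown target name -> "Unable to find target foobar" (JSON-RPC code -32603)
  out=$($RPC nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16454 2>&1) || true
  [[ $out == *"Unable to find target"* ]]
  # Serial number with an embedded 0x1f control character -> "Invalid SN ..." (code -32602)
  out=$($RPC nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4139 2>&1) || true
  [[ $out == *"Invalid SN"* ]]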
00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.125 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.386 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Q<)hjJ#jGD\.U%H_(Wp6Y' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Q<)hjJ#jGD\.U%H_(Wp6Y' nqn.2016-06.io.spdk:cnode30574 00:09:43.387 [2024-07-15 21:00:10.613532] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30574: invalid serial number 'Q<)hjJ#jGD\.U%H_(Wp6Y' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:43.387 { 00:09:43.387 "nqn": "nqn.2016-06.io.spdk:cnode30574", 00:09:43.387 "serial_number": "Q<)hjJ#jGD\\.U%H_(Wp6Y", 00:09:43.387 "method": "nvmf_create_subsystem", 00:09:43.387 "req_id": 1 00:09:43.387 } 00:09:43.387 Got JSON-RPC error response 00:09:43.387 response: 00:09:43.387 { 00:09:43.387 "code": -32602, 00:09:43.387 "message": "Invalid SN Q<)hjJ#jGD\\.U%H_(Wp6Y" 00:09:43.387 }' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:43.387 { 00:09:43.387 "nqn": "nqn.2016-06.io.spdk:cnode30574", 00:09:43.387 "serial_number": "Q<)hjJ#jGD\\.U%H_(Wp6Y", 00:09:43.387 "method": "nvmf_create_subsystem", 00:09:43.387 "req_id": 1 00:09:43.387 } 00:09:43.387 Got JSON-RPC error response 00:09:43.387 response: 00:09:43.387 { 00:09:43.387 "code": -32602, 00:09:43.387 "message": "Invalid SN Q<)hjJ#jGD\\.U%H_(Wp6Y" 00:09:43.387 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.387 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 78 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x47' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.650 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 
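The character-by-character xtrace surrounding this point is invalid.sh's gen_random_s helper: it keeps an array of ASCII codes 32-127, picks one per loop iteration, converts it to hex with printf %x, decodes it with echo -e '\xNN', and appends the character until the requested length is reached; the finished string is then passed to nvmf_create_subsystem as a random serial number or model number. A condensed, hypothetical re-implementation of the same technique (a standalone helper using bash's $RANDOM instead of the script's explicit chars array, not the test script itself):

  # Sketch only: codes 32-127 cover the same printable range plus DEL as the traced chars array.
  gen_random_string() {
    local length=$1 string='' ll code ch
    for (( ll = 0; ll < length; ll++ )); do
      code=$(( RANDOM % 96 + 32 ))                 # random ASCII code in 32..127
      printf -v ch "\\x$(printf '%02x' "$code")"   # decode the hex escape into one character
      string+=$ch
    done
    printf '%s\n' "$string"   # printf avoids echo's option parsing if the string starts with '-'
  }
  gen_random_string 41    # e.g. a 41-character string like the one assembled in this trace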
00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.651 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:09:43.914 21:00:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'b}bm~#ZykS(i-s8|]NxIc*C /dev/null' 00:09:45.739 21:00:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.718 21:00:14 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.718 00:09:47.718 real 0m14.519s 00:09:47.718 user 0m19.520s 00:09:47.718 sys 0m7.182s 00:09:47.718 21:00:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.718 21:00:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.718 ************************************ 00:09:47.718 END TEST nvmf_invalid 00:09:47.718 ************************************ 00:09:48.021 21:00:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:48.021 21:00:14 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:48.021 21:00:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:48.021 21:00:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.021 21:00:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:48.021 ************************************ 00:09:48.021 START TEST nvmf_abort 00:09:48.021 ************************************ 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:48.021 * Looking for test storage... 00:09:48.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.021 21:00:15 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- 
target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.022 21:00:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
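The nvmf_tcp_init sequence traced a little further on moves one physical port (cvl_0_0, the NVMF_TARGET_INTERFACE) into a fresh network namespace, leaves the other port (cvl_0_1, the initiator interface) in the root namespace, and checks connectivity in both directions before the target application is started. Condensed from the commands in this log (interface names and addresses exactly as logged), roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back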
00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:56.160 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:56.160 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.160 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:56.161 Found net devices under 0000:31:00.0: cvl_0_0 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.161 21:00:22 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:56.161 Found net devices under 0000:31:00.1: cvl_0_1 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:56.161 21:00:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:56.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:56.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:09:56.161 00:09:56.161 --- 10.0.0.2 ping statistics --- 00:09:56.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.161 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:09:56.161 00:09:56.161 --- 10.0.0.1 ping statistics --- 00:09:56.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.161 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1817169 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1817169 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1817169 ']' 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.161 21:00:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:56.161 [2024-07-15 21:00:23.402885] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:09:56.161 [2024-07-15 21:00:23.402948] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.161 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.420 [2024-07-15 21:00:23.498756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.420 [2024-07-15 21:00:23.593620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.420 [2024-07-15 21:00:23.593681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.420 [2024-07-15 21:00:23.593689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.420 [2024-07-15 21:00:23.593696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.420 [2024-07-15 21:00:23.593702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.420 [2024-07-15 21:00:23.593841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.421 [2024-07-15 21:00:23.594004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.421 [2024-07-15 21:00:23.594004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:56.991 [2024-07-15 21:00:24.238660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.991 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.251 Malloc0 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.251 Delay0 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
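Stripped of the xtrace noise, the abort.sh setup traced around this point configures the target with a handful of RPCs and then drives it with the abort example. A condensed restatement follows; rpc_cmd is the test suite's wrapper around scripts/rpc.py, so this sketch calls rpc.py directly, and $SPDK stands for the checkout path shown in this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # transport options as passed by abort.sh
  $SPDK/scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM bdev, 4 KiB blocks
  $SPDK/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                       # ~1 s injected latency keeps I/O in flight to abort
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $SPDK/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128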
00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.251 [2024-07-15 21:00:24.328343] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.251 21:00:24 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:57.251 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.251 [2024-07-15 21:00:24.449832] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:59.798 Initializing NVMe Controllers 00:09:59.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:59.798 controller IO queue size 128 less than required 00:09:59.798 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:59.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:59.798 Initialization complete. Launching workers. 
00:09:59.798 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34209 00:09:59.798 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34274, failed to submit 62 00:09:59.798 success 34213, unsuccess 61, failed 0 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.798 rmmod nvme_tcp 00:09:59.798 rmmod nvme_fabrics 00:09:59.798 rmmod nvme_keyring 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1817169 ']' 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1817169 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1817169 ']' 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1817169 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1817169 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1817169' 00:09:59.798 killing process with pid 1817169 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1817169 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1817169 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.798 21:00:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.705 21:00:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:01.705 00:10:01.705 real 0m13.897s 00:10:01.705 user 0m13.703s 00:10:01.705 sys 0m6.992s 00:10:01.705 21:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.705 21:00:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.705 ************************************ 00:10:01.705 END TEST nvmf_abort 00:10:01.705 ************************************ 00:10:01.705 21:00:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:01.705 21:00:28 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:01.705 21:00:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:01.705 21:00:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.705 21:00:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:01.705 ************************************ 00:10:01.705 START TEST nvmf_ns_hotplug_stress 00:10:01.705 ************************************ 00:10:01.705 21:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:01.965 * Looking for test storage... 00:10:01.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.965 21:00:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.965 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.966 21:00:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:01.966 21:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:10.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.137 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:10.138 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.138 21:00:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:10.138 Found net devices under 0000:31:00.0: cvl_0_0 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:10.138 Found net devices under 0000:31:00.1: cvl_0_1 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.138 21:00:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:10.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:10:10.138 00:10:10.138 --- 10.0.0.2 ping statistics --- 00:10:10.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.138 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:10:10.138 00:10:10.138 --- 10.0.0.1 ping statistics --- 00:10:10.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.138 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1822555 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1822555 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1822555 ']' 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.138 21:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.400 [2024-07-15 21:00:37.436272] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:10:10.400 [2024-07-15 21:00:37.436338] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.400 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.400 [2024-07-15 21:00:37.532481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.400 [2024-07-15 21:00:37.626823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.400 [2024-07-15 21:00:37.626885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.400 [2024-07-15 21:00:37.626894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.400 [2024-07-15 21:00:37.626901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.400 [2024-07-15 21:00:37.626908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.400 [2024-07-15 21:00:37.627039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.400 [2024-07-15 21:00:37.627203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.400 [2024-07-15 21:00:37.627203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.970 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.970 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:10.970 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:10.970 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:10.970 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.970 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.970 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:10.970 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:11.229 [2024-07-15 21:00:38.388553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.229 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:11.489 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.489 [2024-07-15 21:00:38.725874] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.489 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:11.750 21:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:12.011 Malloc0 00:10:12.011 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:12.011 Delay0 00:10:12.011 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.271 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:12.534 NULL1 00:10:12.534 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:12.534 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1823216 00:10:12.534 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:12.534 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:12.534 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.534 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.795 21:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.055 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:13.055 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:13.055 true 00:10:13.055 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:13.055 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.316 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.577 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:13.577 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:13.577 true 00:10:13.577 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:13.577 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.838 21:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.838 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:13.838 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:14.099 true 00:10:14.099 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:14.099 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.360 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.360 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:14.360 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:14.628 true 00:10:14.628 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:14.628 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.889 21:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.889 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:14.889 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:15.150 true 00:10:15.150 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:15.150 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.150 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.411 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:15.411 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:15.671 true 00:10:15.671 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:15.671 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.671 21:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.933 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
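From here on the log is the ns_hotplug_stress loop: while the spdk_nvme_perf job started above (PERF_PID=1823216, 30 s of 512-byte randread at queue depth 128 against 10.0.0.2:4420) is still alive, the script keeps hot-removing namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adding Delay0, and bumping the size of the NULL1 bdev (created via bdev_null_create NULL1 1000 512) by one each pass (null_size=1001, 1002, ...). A condensed sketch of the loop body, not the verbatim ns_hotplug_stress.sh:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                          # stop once spdk_nvme_perf exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove nsid 1 under active I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                       # resize NULL1 while it is being read
  done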
00:10:15.933 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:16.193 true 00:10:16.193 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:16.193 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.193 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.453 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:16.453 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:16.453 true 00:10:16.713 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:16.713 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.713 21:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.974 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:16.974 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:16.974 true 00:10:16.974 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:16.974 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.235 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.495 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:17.495 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:17.495 true 00:10:17.495 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:17.495 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.755 21:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.015 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:18.015 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:10:18.015 true 00:10:18.015 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:18.015 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.274 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.535 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:18.535 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:18.535 true 00:10:18.535 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:18.535 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.796 21:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.796 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:18.796 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:19.056 true 00:10:19.056 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:19.056 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.315 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.315 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:19.315 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:19.575 true 00:10:19.575 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:19.575 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.835 21:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.835 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:19.835 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:20.095 true 00:10:20.095 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 
00:10:20.095 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.095 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.354 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:20.354 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:20.615 true 00:10:20.615 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:20.615 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.615 21:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.874 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:20.874 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:21.134 true 00:10:21.134 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:21.134 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.134 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.393 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:21.393 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:21.393 true 00:10:21.652 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:21.652 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.652 21:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.911 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:21.911 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:21.911 true 00:10:21.911 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:21.911 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.170 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.441 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:22.441 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:22.441 true 00:10:22.441 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:22.441 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.701 21:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.961 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:22.961 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:22.961 true 00:10:22.961 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:22.961 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.220 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.478 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:23.478 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:23.478 true 00:10:23.478 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:23.479 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.737 21:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.998 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:23.998 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:23.998 true 00:10:23.998 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:23.998 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.258 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.258 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:24.258 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:24.519 true 00:10:24.519 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:24.519 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.779 21:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.779 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:24.779 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:25.039 true 00:10:25.039 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:25.039 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.299 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.299 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:25.299 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:25.561 true 00:10:25.561 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:25.561 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.822 21:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.822 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:25.822 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:26.083 true 00:10:26.083 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:26.083 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.083 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.345 21:00:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:26.345 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:26.607 true 00:10:26.607 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:26.607 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.607 21:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.866 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:26.866 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:27.132 true 00:10:27.132 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:27.132 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.132 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.395 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:27.396 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:27.396 true 00:10:27.660 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:27.660 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.660 21:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.962 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:27.962 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:27.962 true 00:10:27.962 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:27.962 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.250 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.510 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:28.510 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:28.510 true 00:10:28.510 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:28.510 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.769 21:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.769 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:28.769 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:29.029 true 00:10:29.029 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:29.029 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.288 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.288 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:29.288 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:29.548 true 00:10:29.548 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:29.548 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.807 21:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.807 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:29.807 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:30.066 true 00:10:30.066 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:30.066 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.326 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.326 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:30.326 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:30.589 true 00:10:30.589 
21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:30.589 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.850 21:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.850 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:30.850 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:31.110 true 00:10:31.110 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:31.110 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.110 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.370 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:31.370 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:31.630 true 00:10:31.630 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:31.630 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.630 21:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.890 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:31.890 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:32.152 true 00:10:32.152 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:32.152 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.152 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.413 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:32.413 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:32.413 true 00:10:32.674 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:32.674 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.674 21:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.936 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:32.936 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:32.936 true 00:10:33.197 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:33.197 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.197 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.457 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:33.457 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:33.457 true 00:10:33.457 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:33.457 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.716 21:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.976 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:33.976 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:33.976 true 00:10:33.976 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:33.976 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.238 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.238 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:34.238 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:34.498 true 00:10:34.498 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:34.498 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
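The xtrace above keeps cycling through the same five steps at ns_hotplug_stress.sh lines 44-50 while the null bdev size ticks upward. As a rough sketch only, reconstructed from the visible trace rather than taken from the script source (the loop condition, variable names, and the starting size are assumptions; the rpc.py path, NQN, bdev names, and PID are copied from the trace), the repeating sequence amounts to:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as it appears in the trace
    pid=1823216        # nvmf target PID seen in the kill -0 checks (hypothetical variable name)
    null_size=1000     # starting value is an assumption; the trace only shows it incrementing
    while kill -0 $pid; do                                               # sh@44: loop while the target is still alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: hot-add the Delay0 bdev back as a namespace
        null_size=$((null_size + 1))                                     # sh@49: bump the size for the next resize
        $rpc_py bdev_null_resize NULL1 $null_size                        # sh@50: resize NULL1 while I/O is in flight
    done

The effect is that namespace 1 is hot-removed and re-added, and a second namespace's backing bdev is resized, on every pass until the target process exits.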
00:10:34.759 21:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.759 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:34.759 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:35.019 true 00:10:35.019 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:35.019 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.280 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.280 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:35.280 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:35.540 true 00:10:35.540 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:35.540 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.540 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.808 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:35.808 21:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:36.068 true 00:10:36.068 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:36.068 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.068 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.328 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:36.328 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:36.328 true 00:10:36.588 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:36.588 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.588 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.848 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:36.848 21:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:36.848 true 00:10:36.848 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:36.848 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.109 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.369 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:37.369 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:37.369 true 00:10:37.370 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:37.370 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.630 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.630 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:37.630 21:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:37.891 true 00:10:37.891 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:37.891 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.151 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.151 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:38.151 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:38.411 true 00:10:38.411 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:38.411 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.672 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.672 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 
00:10:38.672 21:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:38.932 true 00:10:38.932 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:38.932 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.192 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.192 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:39.192 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:39.456 true 00:10:39.456 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:39.456 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.715 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.715 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:39.715 21:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:39.974 true 00:10:39.974 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:39.974 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.233 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.233 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:40.233 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:40.493 true 00:10:40.493 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216 00:10:40.493 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.753 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.753 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:40.753 21:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1057
00:10:41.012 true
00:10:41.012 21:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216
00:10:41.012 21:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:41.271 21:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:41.271 21:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058
00:10:41.292 21:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058
00:10:41.552 true
00:10:41.552 21:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216
00:10:41.552 21:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:41.552 21:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:41.811 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059
00:10:41.811 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059
00:10:42.070 true
00:10:42.070 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216
00:10:42.070 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:42.070 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:42.329 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060
00:10:42.329 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060
00:10:42.589 true
00:10:42.589 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216
00:10:42.590 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:42.590 21:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:42.863 Initializing NVMe Controllers
00:10:42.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:42.863 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:10:42.863 Controller IO queue size 128, less than required.
00:10:42.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:42.863 WARNING: Some requested NVMe devices were skipped
00:10:42.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:42.863 Initialization complete. Launching workers.
00:10:42.863 ========================================================
00:10:42.863 Latency(us)
00:10:42.863 Device Information : IOPS MiB/s Average min max
00:10:42.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31372.00 15.32 4079.87 1655.99 9429.01
00:10:42.863 ========================================================
00:10:42.863 Total : 31372.00 15.32 4079.87 1655.99 9429.01
00:10:42.863
00:10:42.863 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061
00:10:42.863 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061
00:10:42.863 true
00:10:43.124 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823216
00:10:43.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1823216) - No such process
00:10:43.124 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1823216
00:10:43.124 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:43.124 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:43.385 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:43.385 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:43.385 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:43.385 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:43.385 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:43.385 null0
00:10:43.385 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:43.385 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:43.385 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:43.691 null1
00:10:43.691 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:43.691 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:43.691 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:43.691 null2
00:10:43.951 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:43.951 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:43.951 21:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:43.951 null3 00:10:43.951 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.951 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.951 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:44.211 null4 00:10:44.211 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:44.211 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:44.211 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:44.211 null5 00:10:44.211 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:44.211 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:44.211 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:44.472 null6 00:10:44.472 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:44.472 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:44.472 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:44.733 null7 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
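From here the trace switches to the parallel phase: eight null bdevs (null0 through null7) are created, then eight background workers each repeatedly add and remove one namespace, and the script waits for all of them (the sh@58-66 driver whose markers appear above and below, with the per-worker body at sh@14-18). A rough sketch of that phase, reconstructed from the xtrace alone and not from the script source (the exact loop structure and any names the trace does not show are assumptions):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as it appears in the trace

    add_remove() {                         # worker body, per the sh@14-18 markers
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do     # sh@16: ten add/remove rounds per worker
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev   # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid         # sh@18
        done
    }

    nthreads=8; pids=()                    # sh@58
    for ((i = 0; i < nthreads; i++)); do   # sh@59-60: one null bdev per worker (100 MB, 4096-byte blocks, per the trace)
        $rpc_py bdev_null_create null$i 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do   # sh@62-64: launch the workers in the background, nsid i+1 paired with null$i
        add_remove $((i + 1)) null$i &
        pids+=($!)
    done
    wait "${pids[@]}"                      # sh@66: wait for all eight workers to finish

With eight workers hammering different namespace IDs of the same subsystem at once, the interleaved add/remove RPCs below are expected output rather than a sign of failure.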
00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:44.733 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1829599 1829601 1829605 1829607 1829610 1829613 1829615 1829618 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:44.734 21:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:44.734 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:44.734 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:44.734 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:44.734 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.994 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:44.995 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.255 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:10:45.255 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:45.255 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:45.255 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.255 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:45.255 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:45.255 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.255 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.256 
21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:45.256 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:45.515 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:45.775 21:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.775 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:10:45.775 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.775 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.775 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:45.775 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:45.775 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.036 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.037 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.297 21:01:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.297 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.557 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.817 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.818 21:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.818 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.818 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.818 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.818 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.818 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.818 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.818 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.079 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 21:01:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.080 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.340 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.600 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.601 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.601 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.601 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.601 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.858 
21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.858 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.858 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.858 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.858 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.858 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.858 21:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.858 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.858 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.858 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.858 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.858 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.858 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.858 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.858 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.118 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:48.118 rmmod nvme_tcp 00:10:48.377 rmmod nvme_fabrics 00:10:48.377 rmmod nvme_keyring 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1822555 ']' 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1822555 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1822555 ']' 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1822555 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1822555 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1822555' 00:10:48.377 killing process with pid 1822555 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1822555 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1822555 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.377 21:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.917 21:01:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:50.917 00:10:50.917 real 0m48.722s 00:10:50.917 user 3m14.712s 00:10:50.917 sys 0m17.429s 00:10:50.917 21:01:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.917 21:01:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.917 ************************************ 00:10:50.917 END TEST nvmf_ns_hotplug_stress 00:10:50.917 ************************************ 00:10:50.917 21:01:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:50.917 21:01:17 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:50.917 21:01:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:50.917 21:01:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.917 21:01:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.917 ************************************ 00:10:50.917 START TEST nvmf_connect_stress 00:10:50.917 ************************************ 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:50.917 * Looking for test storage... 
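The ns_hotplug_stress trace above boils down to a tight add/remove cycle against nqn.2016-06.io.spdk:cnode1. The sketch below is a loose bash reconstruction of what the xtrace records at ns_hotplug_stress.sh lines 16-18: the rpc.py path, its arguments, the nsid-to-null-bdev pairing, and the bound of 10 iterations are taken from the log, while the add_remove wrapper name and the backgrounding of one worker per bdev are illustrative assumptions rather than the actual test script.

# Loose reconstruction of the traced loop; calls mirror the log, structure is assumed.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
add_remove() {
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; ++i )); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"     # attach bdev as namespace $nsid
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"             # and immediately detach it
    done
}
# One worker per null bdev (null0..null7 -> nsid 1..8); running them concurrently
# would explain why the add/remove lines in the trace above interleave across nsids.
for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &
done
wait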
00:10:50.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:50.917 21:01:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:59.058 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:59.058 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:59.058 Found net devices under 0000:31:00.0: cvl_0_0 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.058 21:01:25 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:59.058 Found net devices under 0000:31:00.1: cvl_0_1 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:59.058 21:01:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:59.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:10:59.058 00:10:59.058 --- 10.0.0.2 ping statistics --- 00:10:59.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.058 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:10:59.058 00:10:59.058 --- 10.0.0.1 ping statistics --- 00:10:59.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.058 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1835302 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1835302 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1835302 ']' 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.058 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.058 [2024-07-15 21:01:26.204950] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
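For reference, the nvmf_tcp_init sequence traced just above builds the point-to-point topology that the 10.0.0.1/10.0.0.2 pings verify: the target-side port is moved into a private network namespace while the initiator-side port stays in the root namespace. The condensed command list below is assembled from the common.sh trace in this log; interface names, addresses, and flags are copied from it, but the grouping and comments are editorial and this is not the common.sh source itself.

# Target port lives in its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the default port 4420 and confirm reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target application is then launched inside the namespace (nvmfappstart in the trace):
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &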
00:10:59.058 [2024-07-15 21:01:26.205014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.058 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.058 [2024-07-15 21:01:26.304572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.319 [2024-07-15 21:01:26.400068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.319 [2024-07-15 21:01:26.400151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.319 [2024-07-15 21:01:26.400160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.319 [2024-07-15 21:01:26.400167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.319 [2024-07-15 21:01:26.400173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.319 [2024-07-15 21:01:26.400339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.319 [2024-07-15 21:01:26.400553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.319 [2024-07-15 21:01:26.400554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.889 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.889 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:59.889 21:01:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:59.889 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:59.889 21:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.889 [2024-07-15 21:01:27.037734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.889 [2024-07-15 21:01:27.079395] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.889 NULL1 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1835417 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.889 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.889 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:59.890 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.150 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.410 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.410 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:00.410 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.410 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.410 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.675 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.675 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:00.675 21:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.675 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.675 21:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.963 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.963 21:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1835417 00:11:00.963 21:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.963 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.963 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.280 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.280 21:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:01.280 21:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.280 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.280 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.540 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.540 21:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:01.540 21:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.540 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.540 21:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.111 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.111 21:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:02.111 21:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.111 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.111 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.370 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.370 21:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:02.370 21:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.370 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.370 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.631 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.631 21:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:02.631 21:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.631 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.631 21:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.890 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.890 21:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:02.890 21:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.890 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.890 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.460 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.460 21:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:03.460 21:01:30 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.460 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.460 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.719 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.719 21:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:03.719 21:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.719 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.719 21:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.979 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.979 21:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:03.979 21:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.979 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.979 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.240 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.240 21:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:04.240 21:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.240 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.240 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.499 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.499 21:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:04.499 21:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.499 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.499 21:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.067 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.067 21:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:05.067 21:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.067 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.067 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.327 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.327 21:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:05.327 21:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.327 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.327 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.587 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.587 21:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:05.587 21:01:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:05.587 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.587 21:01:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.846 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.846 21:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:05.846 21:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.846 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.846 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.117 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.117 21:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:06.117 21:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.117 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.117 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.691 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.691 21:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:06.691 21:01:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.691 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.691 21:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.951 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.951 21:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:06.951 21:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.951 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.951 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.212 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.212 21:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:07.212 21:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.212 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.212 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.473 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.473 21:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:07.473 21:01:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.473 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.473 21:01:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.733 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.733 21:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:07.733 21:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.733 21:01:35 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.733 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.303 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.303 21:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:08.303 21:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.303 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.303 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.563 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.563 21:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:08.563 21:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.563 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.563 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.824 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.824 21:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:08.824 21:01:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.824 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.824 21:01:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.085 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.085 21:01:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:09.085 21:01:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.085 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.085 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.656 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.656 21:01:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:09.656 21:01:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.656 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.656 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.917 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.917 21:01:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:09.917 21:01:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.917 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.917 21:01:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.178 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.178 21:01:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:10.178 21:01:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.178 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:11:10.178 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.178 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1835417 00:11:10.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1835417) - No such process 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1835417 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.439 rmmod nvme_tcp 00:11:10.439 rmmod nvme_fabrics 00:11:10.439 rmmod nvme_keyring 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1835302 ']' 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1835302 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1835302 ']' 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1835302 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.439 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1835302 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1835302' 00:11:10.700 killing process with pid 1835302 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1835302 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1835302 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.700 21:01:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.256 21:01:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:13.256 00:11:13.256 real 0m22.154s 00:11:13.256 user 0m43.584s 00:11:13.256 sys 0m9.350s 00:11:13.256 21:01:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.256 21:01:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.256 ************************************ 00:11:13.256 END TEST nvmf_connect_stress 00:11:13.256 ************************************ 00:11:13.256 21:01:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:13.256 21:01:39 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:13.256 21:01:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:13.256 21:01:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.256 21:01:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:13.256 ************************************ 00:11:13.256 START TEST nvmf_fused_ordering 00:11:13.256 ************************************ 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:13.256 * Looking for test storage... 
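Condensed, the connect_stress run that just finished set up its target and stressor as follows (a sketch assembled from the trace above; rpc_cmd is the suite's wrapper around scripts/rpc.py, and $rootdir stands for the spdk checkout):

    # target side: TCP transport, one subsystem backed by a null bdev, listener on 10.0.0.2:4420
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    # initiator side: stress the listener with connection attempts for 10 seconds
    $rootdir/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10

The long run of kill -0 1835417 / rpc_cmd lines before the summary is the script checking that the stressor is still alive while it replays the batch of RPCs it collected in rpc.txt; the final "kill: (1835417) - No such process" only means the 10-second stressor had already exited by the last check.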
00:11:13.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:13.256 21:01:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.392 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:21.393 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:21.393 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:21.393 Found net devices under 0000:31:00.0: cvl_0_0 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.393 21:01:47 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:21.393 Found net devices under 0000:31:00.1: cvl_0_1 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.393 21:01:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:11:21.393 00:11:21.393 --- 10.0.0.2 ping statistics --- 00:11:21.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.393 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:11:21.393 00:11:21.393 --- 10.0.0.1 ping statistics --- 00:11:21.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.393 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1842365 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1842365 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1842365 ']' 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.393 21:01:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.393 [2024-07-15 21:01:48.380805] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
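As with connect_stress, nvmfappstart launches the target inside the namespace and then blocks until its RPC socket answers, only this time pinned to a single core (-m 0x2). Roughly (the polling loop below is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation):

    # launch the target in the namespace created above, with full tracing enabled (-e 0xFFFF)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # do not issue any rpc_cmd until the app is listening on its UNIX-domain RPC socket
    while ! test -S /var/tmp/spdk.sock; do sleep 0.1; done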
00:11:21.393 [2024-07-15 21:01:48.380854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.393 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.393 [2024-07-15 21:01:48.472257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.393 [2024-07-15 21:01:48.545676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.393 [2024-07-15 21:01:48.545729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.393 [2024-07-15 21:01:48.545737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.393 [2024-07-15 21:01:48.545743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.393 [2024-07-15 21:01:48.545749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.393 [2024-07-15 21:01:48.545782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.965 [2024-07-15 21:01:49.207009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.965 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.966 [2024-07-15 21:01:49.231282] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.966 21:01:49 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.966 NULL1 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.966 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.227 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.227 21:01:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:22.227 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.227 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.227 21:01:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.227 21:01:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:22.227 [2024-07-15 21:01:49.300071] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:11:22.227 [2024-07-15 21:01:49.300119] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842403 ] 00:11:22.227 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.489 Attached to nqn.2016-06.io.spdk:cnode1 00:11:22.489 Namespace ID: 1 size: 1GB 00:11:22.489 fused_ordering(0) 00:11:22.489 fused_ordering(1) 00:11:22.489 fused_ordering(2) 00:11:22.489 fused_ordering(3) 00:11:22.489 fused_ordering(4) 00:11:22.489 fused_ordering(5) 00:11:22.489 fused_ordering(6) 00:11:22.489 fused_ordering(7) 00:11:22.489 fused_ordering(8) 00:11:22.489 fused_ordering(9) 00:11:22.489 fused_ordering(10) 00:11:22.489 fused_ordering(11) 00:11:22.489 fused_ordering(12) 00:11:22.489 fused_ordering(13) 00:11:22.489 fused_ordering(14) 00:11:22.489 fused_ordering(15) 00:11:22.489 fused_ordering(16) 00:11:22.489 fused_ordering(17) 00:11:22.489 fused_ordering(18) 00:11:22.489 fused_ordering(19) 00:11:22.489 fused_ordering(20) 00:11:22.489 fused_ordering(21) 00:11:22.489 fused_ordering(22) 00:11:22.489 fused_ordering(23) 00:11:22.489 fused_ordering(24) 00:11:22.489 fused_ordering(25) 00:11:22.489 fused_ordering(26) 00:11:22.489 fused_ordering(27) 00:11:22.489 fused_ordering(28) 00:11:22.489 fused_ordering(29) 00:11:22.489 fused_ordering(30) 00:11:22.490 fused_ordering(31) 00:11:22.490 fused_ordering(32) 00:11:22.490 fused_ordering(33) 00:11:22.490 fused_ordering(34) 00:11:22.490 fused_ordering(35) 00:11:22.490 fused_ordering(36) 00:11:22.490 fused_ordering(37) 00:11:22.490 fused_ordering(38) 00:11:22.490 fused_ordering(39) 00:11:22.490 fused_ordering(40) 00:11:22.490 fused_ordering(41) 00:11:22.490 fused_ordering(42) 00:11:22.490 fused_ordering(43) 00:11:22.490 
fused_ordering(44) [output for fused_ordering(45) through fused_ordering(1012) condensed: the counter advances uninterrupted, log timestamps move from 00:11:22.490 to 00:11:24.512, and no errors are reported over that span]
00:11:24.512 fused_ordering(1013) 00:11:24.512 fused_ordering(1014) 00:11:24.512 fused_ordering(1015) 00:11:24.512 fused_ordering(1016) 00:11:24.512 fused_ordering(1017) 00:11:24.512 fused_ordering(1018) 00:11:24.512 fused_ordering(1019) 00:11:24.512 fused_ordering(1020) 00:11:24.512 fused_ordering(1021) 00:11:24.512 fused_ordering(1022) 00:11:24.512 fused_ordering(1023) 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.512 rmmod nvme_tcp 00:11:24.512 rmmod nvme_fabrics 00:11:24.512 rmmod nvme_keyring 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1842365 ']' 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1842365 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1842365 ']' 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1842365 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1842365 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1842365' 00:11:24.512 killing process with pid 1842365 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1842365 00:11:24.512 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1842365 00:11:24.773 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.773 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.773 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.773 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.773 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.773 21:01:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.773 21:01:51 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.773 21:01:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.687 21:01:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.687 00:11:26.687 real 0m13.944s 00:11:26.687 user 0m7.088s 00:11:26.687 sys 0m7.562s 00:11:26.687 21:01:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.687 21:01:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:26.687 ************************************ 00:11:26.687 END TEST nvmf_fused_ordering 00:11:26.687 ************************************ 00:11:26.949 21:01:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:26.949 21:01:54 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:26.949 21:01:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:26.949 21:01:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.949 21:01:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:26.949 ************************************ 00:11:26.949 START TEST nvmf_delete_subsystem 00:11:26.949 ************************************ 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:26.949 * Looking for test storage... 00:11:26.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.949 21:01:54 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.949 21:01:54 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.949 21:01:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.090 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.090 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.090 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.090 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.091 21:02:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:35.091 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:35.091 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.091 21:02:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:35.091 Found net devices under 0000:31:00.0: cvl_0_0 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:35.091 Found net devices under 0000:31:00.1: cvl_0_1 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.091 21:02:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.091 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:11:35.353 00:11:35.353 --- 10.0.0.2 ping statistics --- 00:11:35.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.353 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:11:35.353 00:11:35.353 --- 10.0.0.1 ping statistics --- 00:11:35.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.353 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1847724 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1847724 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1847724 ']' 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.353 21:02:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 [2024-07-15 21:02:02.541818] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:11:35.353 [2024-07-15 21:02:02.541889] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.353 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.353 [2024-07-15 21:02:02.621212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:35.614 [2024-07-15 21:02:02.695315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:35.614 [2024-07-15 21:02:02.695366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.614 [2024-07-15 21:02:02.695375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.614 [2024-07-15 21:02:02.695382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.614 [2024-07-15 21:02:02.695387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.614 [2024-07-15 21:02:02.695531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.614 [2024-07-15 21:02:02.695649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.184 [2024-07-15 21:02:03.374985] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.184 [2024-07-15 21:02:03.391124] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.184 NULL1 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.184 Delay0 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1847813 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:36.184 21:02:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:36.184 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.444 [2024-07-15 21:02:03.475747] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
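In plain terms, the setup that delete_subsystem.sh has just completed is the following rpc.py/perf sequence (a sketch; in the script these go through the rpc_cmd wrapper and the target runs inside the cvl_0_0_ns_spdk namespace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512             # null bdev: 1000 MB, 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # inject ~1 s of latency (values in microseconds)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &      # 5 s of queued I/O against the slow namespace
  perf_pid=$!
  sleep 2

Two seconds into the run the script deletes nqn.2016-06.io.spdk:cnode1 while I/O is still outstanding, which is what produces the long run of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines that follows: the in-flight commands are aborted back to the perf tool as the subsystem goes away, which is exactly the situation this test exercises.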
00:11:38.354 21:02:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.354 21:02:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.354 21:02:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 starting I/O failed: -6 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 starting I/O failed: -6 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 starting I/O failed: -6 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 starting I/O failed: -6 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 starting I/O failed: -6 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 starting I/O failed: -6 00:11:38.614 Write completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 starting I/O failed: -6 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 starting I/O failed: -6 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.614 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 [2024-07-15 21:02:05.679134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adcb50 is same with the state(5) to be set 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, 
sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 
00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 starting I/O failed: -6 00:11:38.615 [2024-07-15 21:02:05.684430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f82e000d4b0 is same with the state(5) to be set 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read 
completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Write completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:38.615 Read completed with error (sct=0, sc=8) 00:11:39.648 [2024-07-15 21:02:06.654750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abb6e0 is same with the state(5) to be set 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 [2024-07-15 21:02:06.682593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc6c0 is same with the state(5) to be set 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 [2024-07-15 21:02:06.682995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adcea0 is same with the state(5) to be set 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write 
completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 [2024-07-15 21:02:06.686307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f82e000d020 is same with the state(5) to be set 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Write completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 Read completed with error (sct=0, sc=8) 00:11:39.648 [2024-07-15 21:02:06.686387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f82e000d800 is same with the state(5) to be set 00:11:39.648 Initializing NVMe Controllers 00:11:39.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.648 Controller IO queue size 128, less than required. 00:11:39.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:39.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:39.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:39.648 Initialization complete. Launching workers. 
00:11:39.648 ======================================================== 00:11:39.648 Latency(us) 00:11:39.648 Device Information : IOPS MiB/s Average min max 00:11:39.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.98 0.08 912947.97 206.20 1005314.76 00:11:39.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.46 0.08 904708.59 288.20 1009312.75 00:11:39.648 ======================================================== 00:11:39.648 Total : 326.44 0.16 908771.68 206.20 1009312.75 00:11:39.648 00:11:39.648 [2024-07-15 21:02:06.686845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb6e0 (9): Bad file descriptor 00:11:39.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:39.648 21:02:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.648 21:02:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:39.648 21:02:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1847813 00:11:39.648 21:02:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1847813 00:11:39.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1847813) - No such process 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1847813 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1847813 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1847813 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.934 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.934 [2024-07-15 21:02:07.219937] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.193 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.193 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.193 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.193 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.193 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.194 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1848703 00:11:40.194 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:40.194 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:40.194 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1848703 00:11:40.194 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.194 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.194 [2024-07-15 21:02:07.287089] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
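The rest of the test is bookkeeping around the background perf processes. Schematically (a sketch based on the traced line numbers, not a verbatim copy of delete_subsystem.sh):

  # the first perf run must fail once its subsystem is deleted out from under it
  NOT wait "$perf_pid"       # NOT is an autotest_common.sh helper that succeeds only if the wrapped command fails

  # subsystem, listener and Delay0 namespace are then re-created (trace lines @48-@50) and a
  # shorter 3-second perf run is started; this time it is simply polled until it exits on its own
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break    # bounded wait (~10 s at 0.5 s per pass); the timeout path is not visible in this excerpt
      sleep 0.5
  done
  wait "$perf_pid"

Once kill -0 reports "No such process" the loop ends, the trap is cleared and nvmftestfini tears the environment down, as the trace below shows.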
00:11:40.763 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.763 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1848703 00:11:40.763 21:02:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.023 21:02:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.023 21:02:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1848703 00:11:41.023 21:02:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.592 21:02:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.592 21:02:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1848703 00:11:41.592 21:02:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.162 21:02:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.162 21:02:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1848703 00:11:42.162 21:02:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.731 21:02:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.731 21:02:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1848703 00:11:42.731 21:02:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.991 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.991 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1848703 00:11:42.991 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:43.251 Initializing NVMe Controllers 00:11:43.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:43.251 Controller IO queue size 128, less than required. 00:11:43.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:43.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:43.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:43.251 Initialization complete. Launching workers. 
00:11:43.251 ======================================================== 00:11:43.251 Latency(us) 00:11:43.251 Device Information : IOPS MiB/s Average min max 00:11:43.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002033.22 1000182.23 1041879.38 00:11:43.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002818.52 1000211.73 1009173.60 00:11:43.251 ======================================================== 00:11:43.251 Total : 256.00 0.12 1002425.87 1000182.23 1041879.38 00:11:43.251 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1848703 00:11:43.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1848703) - No such process 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1848703 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.512 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:43.512 rmmod nvme_tcp 00:11:43.772 rmmod nvme_fabrics 00:11:43.772 rmmod nvme_keyring 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1847724 ']' 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1847724 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1847724 ']' 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1847724 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1847724 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1847724' 00:11:43.772 killing process with pid 1847724 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1847724 00:11:43.772 21:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1847724 00:11:43.772 21:02:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.772 21:02:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.772 21:02:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.772 21:02:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.772 21:02:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.772 21:02:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.772 21:02:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.772 21:02:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.330 21:02:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:46.330 00:11:46.330 real 0m19.057s 00:11:46.330 user 0m31.147s 00:11:46.330 sys 0m7.075s 00:11:46.330 21:02:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.330 21:02:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 ************************************ 00:11:46.330 END TEST nvmf_delete_subsystem 00:11:46.330 ************************************ 00:11:46.330 21:02:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:46.330 21:02:13 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:46.330 21:02:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:46.330 21:02:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.330 21:02:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 ************************************ 00:11:46.330 START TEST nvmf_ns_masking 00:11:46.330 ************************************ 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:46.330 * Looking for test storage... 
00:11:46.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fc36236d-d6e5-4719-8456-5bd0cf04e20e 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=46684cb9-8006-4619-85f2-f61a9d43477a 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4dffa114-5807-4cb5-b5f5-23045100d46d 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:46.330 21:02:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:46.331 21:02:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:54.469 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:54.469 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.469 
21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:54.469 Found net devices under 0000:31:00.0: cvl_0_0 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:54.469 Found net devices under 0000:31:00.1: cvl_0_1 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:54.469 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:11:54.470 00:11:54.470 --- 10.0.0.2 ping statistics --- 00:11:54.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.470 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:11:54.470 00:11:54.470 --- 10.0.0.1 ping statistics --- 00:11:54.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.470 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1854122 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1854122 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1854122 ']' 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.470 21:02:21 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.470 21:02:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:54.470 [2024-07-15 21:02:21.640574] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:11:54.470 [2024-07-15 21:02:21.640634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.470 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.470 [2024-07-15 21:02:21.717016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.731 [2024-07-15 21:02:21.781672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.731 [2024-07-15 21:02:21.781711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.731 [2024-07-15 21:02:21.781719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.731 [2024-07-15 21:02:21.781726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.731 [2024-07-15 21:02:21.781732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.731 [2024-07-15 21:02:21.781757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.300 21:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.300 21:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:55.300 21:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:55.300 21:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:55.300 21:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.300 21:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.300 21:02:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:55.561 [2024-07-15 21:02:22.633120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.561 21:02:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:55.561 21:02:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:55.561 21:02:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:55.561 Malloc1 00:11:55.821 21:02:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:55.821 Malloc2 00:11:55.821 21:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
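For orientation: beneath the xtrace noise above, the target bring-up is a short RPC sequence against the nvmf_tgt that was just started inside the cvl_0_0_ns_spdk namespace. A condensed sketch of it follows, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                                        # TCP transport, flags as used in this run
    rpc.py bdev_malloc_create 64 512 -b Malloc1                                           # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME    # allow any host, fixed serial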
00:11:56.081 21:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:56.081 21:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.341 [2024-07-15 21:02:23.493489] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.341 21:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:56.341 21:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4dffa114-5807-4cb5-b5f5-23045100d46d -a 10.0.0.2 -s 4420 -i 4 00:11:56.601 21:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.601 21:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:56.601 21:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.601 21:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:56.601 21:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.512 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.772 [ 0]:0x1 00:11:58.772 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.772 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.772 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=74ddcfca4d4346a1a0856b9917f4a61e 00:11:58.772 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 74ddcfca4d4346a1a0856b9917f4a61e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.772 21:02:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
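The connect/visibility steps that follow repeat one small pattern: connect the kernel initiator to the subsystem as nqn.2016-06.io.spdk:host1, then decide whether a namespace is exposed by listing the active NSIDs and reading its NGUID. A simplified reconstruction of that check, assuming the controller enumerates as /dev/nvme0 as it does in this run (the -I host-id flag from the trace is omitted here):

    # Connect from the initiator side, as issued above.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4

    # Rough equivalent of the test's ns_is_visible helper: the NSID must be
    # listed and its NGUID must not be all zeroes (an all-zero NGUID is what a
    # masked namespace reports in this log).
    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        [[ $(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid) != "00000000000000000000000000000000" ]]
    }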
00:11:58.772 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:58.772 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.772 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.772 [ 0]:0x1 00:11:58.772 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.772 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=74ddcfca4d4346a1a0856b9917f4a61e 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 74ddcfca4d4346a1a0856b9917f4a61e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:59.032 [ 1]:0x2 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c80f64d73bbc477596e2934702d86200 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c80f64d73bbc477596e2934702d86200 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.032 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.292 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:59.292 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:59.292 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4dffa114-5807-4cb5-b5f5-23045100d46d -a 10.0.0.2 -s 4420 -i 4 00:11:59.551 21:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:59.551 21:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.551 21:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.551 21:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:59.551 21:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:59.551 21:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.087 21:02:28 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.087 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:02.088 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.088 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:02.088 [ 0]:0x2 00:12:02.088 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:02.088 21:02:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c80f64d73bbc477596e2934702d86200 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
c80f64d73bbc477596e2934702d86200 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:02.088 [ 0]:0x1 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=74ddcfca4d4346a1a0856b9917f4a61e 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 74ddcfca4d4346a1a0856b9917f4a61e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:02.088 [ 1]:0x2 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c80f64d73bbc477596e2934702d86200 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c80f64d73bbc477596e2934702d86200 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.088 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:02.347 [ 0]:0x2 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c80f64d73bbc477596e2934702d86200 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c80f64d73bbc477596e2934702d86200 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:02.347 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.607 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:02.607 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:02.607 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4dffa114-5807-4cb5-b5f5-23045100d46d -a 10.0.0.2 -s 4420 -i 4 00:12:02.867 21:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:02.867 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.867 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.867 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:02.867 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:02.867 21:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
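This is the core of the masking test, easier to see with the trace flattened: namespace 1 is re-attached with --no-auto-visible, which hides it from every host; nvmf_ns_add_host then exposes it to nqn.2016-06.io.spdk:host1 only, and nvmf_ns_remove_host hides it again, with ns_is_visible / NOT ns_is_visible asserting each state while namespace 2 (added without the flag) stays visible throughout. Condensed from the RPCs above:

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2016-06.io.spdk:host1
    rpc.py nvmf_subsystem_add_ns $SUBNQN Malloc1 -n 1 --no-auto-visible   # nsid 1 starts out hidden
    rpc.py nvmf_ns_add_host      $SUBNQN 1 $HOSTNQN                       # nsid 1 now visible to host1
    rpc.py nvmf_ns_remove_host   $SUBNQN 1 $HOSTNQN                       # nsid 1 hidden from host1 again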
00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.777 [ 0]:0x1 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.777 21:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.777 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=74ddcfca4d4346a1a0856b9917f4a61e 00:12:04.777 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 74ddcfca4d4346a1a0856b9917f4a61e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.777 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:04.777 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.777 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.777 [ 1]:0x2 00:12:04.777 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.777 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c80f64d73bbc477596e2934702d86200 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c80f64d73bbc477596e2934702d86200 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:05.039 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.040 [ 0]:0x2 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.040 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c80f64d73bbc477596e2934702d86200 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c80f64d73bbc477596e2934702d86200 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:05.303 [2024-07-15 21:02:32.499269] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:05.303 request: 00:12:05.303 { 00:12:05.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.303 "nsid": 2, 00:12:05.303 "host": "nqn.2016-06.io.spdk:host1", 00:12:05.303 "method": "nvmf_ns_remove_host", 00:12:05.303 "req_id": 1 00:12:05.303 } 00:12:05.303 Got JSON-RPC error response 00:12:05.303 response: 00:12:05.303 { 00:12:05.303 "code": -32602, 00:12:05.303 "message": "Invalid parameters" 00:12:05.303 } 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.303 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.564 [ 0]:0x2 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c80f64d73bbc477596e2934702d86200 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
c80f64d73bbc477596e2934702d86200 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1856327 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1856327 /var/tmp/host.sock 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1856327 ']' 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:05.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.564 21:02:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.564 [2024-07-15 21:02:32.773961] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
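The failed nvmf_ns_remove_host above is the negative case: namespace 2 was added without --no-auto-visible, so the target refuses to edit its per-host visibility and returns the JSON-RPC -32602 "Invalid parameters" error, which the NOT wrapper treats as the expected outcome. A simplified stand-in for that wrapper (the real helper in autotest_common.sh also validates its argument) looks roughly like:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    # Expected to fail: nsid 2 is auto-visible, so per-host masking RPCs are rejected.
    NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1

From here the test drives a second SPDK instance (spdk_tgt -r /var/tmp/host.sock -m 2) as the initiator instead of the kernel nvme driver, re-creating the two namespaces with fixed NGUIDs and attaching one controller per host NQN over the /var/tmp/host.sock RPC socket.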
00:12:05.564 [2024-07-15 21:02:32.774012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856327 ] 00:12:05.564 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.825 [2024-07-15 21:02:32.854974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.825 [2024-07-15 21:02:32.919547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.395 21:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.395 21:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:06.395 21:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.655 21:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:06.655 21:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fc36236d-d6e5-4719-8456-5bd0cf04e20e 00:12:06.655 21:02:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:06.655 21:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FC36236DD6E5471984565BD0CF04E20E -i 00:12:06.916 21:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 46684cb9-8006-4619-85f2-f61a9d43477a 00:12:06.916 21:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:06.916 21:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 46684CB98006461985F2F61A9D43477A -i 00:12:06.916 21:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.177 21:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:07.177 21:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:07.177 21:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:07.747 nvme0n1 00:12:07.747 21:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:07.747 21:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:08.007 nvme1n2 00:12:08.007 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:08.007 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:08.007 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:08.007 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:08.007 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:08.267 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:08.267 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:08.267 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:08.267 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fc36236d-d6e5-4719-8456-5bd0cf04e20e == \f\c\3\6\2\3\6\d\-\d\6\e\5\-\4\7\1\9\-\8\4\5\6\-\5\b\d\0\c\f\0\4\e\2\0\e ]] 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 46684cb9-8006-4619-85f2-f61a9d43477a == \4\6\6\8\4\c\b\9\-\8\0\0\6\-\4\6\1\9\-\8\5\f\2\-\f\6\1\a\9\d\4\3\4\7\7\a ]] 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1856327 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1856327 ']' 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1856327 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1856327 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1856327' 00:12:08.527 killing process with pid 1856327 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1856327 00:12:08.527 21:02:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1856327 00:12:08.787 21:02:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:09.047 21:02:36 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.047 rmmod nvme_tcp 00:12:09.047 rmmod nvme_fabrics 00:12:09.047 rmmod nvme_keyring 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1854122 ']' 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1854122 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1854122 ']' 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1854122 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1854122 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1854122' 00:12:09.047 killing process with pid 1854122 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1854122 00:12:09.047 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1854122 00:12:09.307 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.307 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.307 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.307 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.307 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.307 21:02:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.307 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.307 21:02:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.227 21:02:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.227 00:12:11.227 real 0m25.332s 00:12:11.227 user 0m24.725s 00:12:11.227 sys 0m7.996s 00:12:11.227 21:02:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.227 21:02:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 ************************************ 00:12:11.227 END TEST nvmf_ns_masking 00:12:11.227 ************************************ 00:12:11.488 21:02:38 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:11.488 21:02:38 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:11.488 21:02:38 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:11.488 21:02:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:11.488 21:02:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.488 21:02:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.488 ************************************ 00:12:11.488 START TEST nvmf_nvme_cli 00:12:11.488 ************************************ 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:11.488 * Looking for test storage... 00:12:11.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:11.488 21:02:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:19.693 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:19.693 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:19.693 Found net devices under 0000:31:00.0: cvl_0_0 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:19.693 Found net devices under 0000:31:00.1: cvl_0_1 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.693 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.694 21:02:46 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:19.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:12:19.694 00:12:19.694 --- 10.0.0.2 ping statistics --- 00:12:19.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.694 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:12:19.694 00:12:19.694 --- 10.0.0.1 ping statistics --- 00:12:19.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.694 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1861857 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1861857 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1861857 ']' 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.694 21:02:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.694 [2024-07-15 21:02:46.932697] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
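nvmfappstart then launches the SPDK target inside the target namespace and blocks until its RPC socket is up before any configuration is issued. Roughly equivalent to the following sketch (the polling loop is a simplified stand-in for the waitforlisten helper seen in the trace; rpc_get_methods is only used here as a cheap RPC to probe with):

# run nvmf_tgt on cores 0-3 inside the target namespace
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# wait until the app listens on /var/tmp/spdk.sock before using rpc.py
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done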
00:12:19.694 [2024-07-15 21:02:46.932752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.694 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.037 [2024-07-15 21:02:47.008291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.037 [2024-07-15 21:02:47.081375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.037 [2024-07-15 21:02:47.081415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.037 [2024-07-15 21:02:47.081423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.037 [2024-07-15 21:02:47.081429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.037 [2024-07-15 21:02:47.081435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.037 [2024-07-15 21:02:47.081516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.037 [2024-07-15 21:02:47.081649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.037 [2024-07-15 21:02:47.081803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.037 [2024-07-15 21:02:47.081804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 [2024-07-15 21:02:47.752819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 Malloc0 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 Malloc1 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.664 21:02:47 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 [2024-07-15 21:02:47.842747] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.664 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:20.924 00:12:20.924 Discovery Log Number of Records 2, Generation counter 2 00:12:20.924 =====Discovery Log Entry 0====== 00:12:20.924 trtype: tcp 00:12:20.924 adrfam: ipv4 00:12:20.924 subtype: current discovery subsystem 00:12:20.924 treq: not required 00:12:20.924 portid: 0 00:12:20.924 trsvcid: 4420 00:12:20.924 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:20.924 traddr: 10.0.0.2 00:12:20.924 eflags: explicit discovery connections, duplicate discovery information 00:12:20.924 sectype: none 00:12:20.924 =====Discovery Log Entry 1====== 00:12:20.924 trtype: tcp 00:12:20.924 adrfam: ipv4 00:12:20.924 subtype: nvme subsystem 00:12:20.924 treq: not required 00:12:20.924 portid: 0 00:12:20.924 trsvcid: 4420 00:12:20.924 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:20.924 traddr: 10.0.0.2 00:12:20.924 eflags: none 00:12:20.924 sectype: none 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- 
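The rpc_cmd calls traced above (rpc_cmd is a thin wrapper around scripts/rpc.py) provision the target end to end: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, one subsystem carrying both namespaces, and data plus discovery listeners on 10.0.0.2:4420. The same sequence issued directly with rpc.py, followed by the discovery check the test runs from the initiator side:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
     -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# initiator: expect two discovery log entries (discovery subsystem + cnode1)
nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"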
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:20.924 21:02:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.306 21:02:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:22.306 21:02:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:22.306 21:02:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.306 21:02:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:22.306 21:02:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:22.306 21:02:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.216 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:24.476 21:02:51 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:24.476 /dev/nvme0n1 ]] 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.476 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:24.736 21:02:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- 
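On the initiator side the test connects to the subsystem, waits until both namespaces show up as block devices with the expected serial, enumerates them, and disconnects again. A compressed version of that flow (the wait loop stands in for waitforserial, the awk one-liner for the nvme-list parsing in get_nvme_devs; NVME_HOSTNQN/NVME_HOSTID are the values exported by nvmf/common.sh in this run):

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
     --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

# wait for 2 namespaces carrying the subsystem serial to appear
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 2 ]; do
    sleep 1
done

# enumerate the attached namespaces (/dev/nvme0n1 and /dev/nvme0n2 in this run)
nvme list | awk '$1 ~ /^\/dev\/nvme/ {print $1}'

nvme disconnect -n nqn.2016-06.io.spdk:cnode1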
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.996 rmmod nvme_tcp 00:12:24.996 rmmod nvme_fabrics 00:12:24.996 rmmod nvme_keyring 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1861857 ']' 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1861857 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1861857 ']' 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1861857 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1861857 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1861857' 00:12:24.996 killing process with pid 1861857 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1861857 00:12:24.996 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1861857 00:12:25.286 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.286 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.286 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.286 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.286 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.286 21:02:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.286 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.286 21:02:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.195 21:02:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:27.195 00:12:27.195 real 0m15.838s 00:12:27.195 user 0m23.372s 00:12:27.195 sys 0m6.572s 00:12:27.195 21:02:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.195 21:02:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.195 ************************************ 00:12:27.195 END TEST nvmf_nvme_cli 00:12:27.195 ************************************ 00:12:27.195 21:02:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:27.195 21:02:54 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:27.195 21:02:54 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:27.195 21:02:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:27.195 21:02:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.195 21:02:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:27.455 ************************************ 00:12:27.455 START TEST nvmf_vfio_user 00:12:27.455 ************************************ 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:27.455 * Looking for test storage... 00:12:27.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.455 21:02:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:27.456 
21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1863505 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1863505' 00:12:27.456 Process pid: 1863505 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1863505 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1863505 ']' 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.456 21:02:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:27.456 [2024-07-15 21:02:54.696240] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:12:27.456 [2024-07-15 21:02:54.696293] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.456 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.720 [2024-07-15 21:02:54.766581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.720 [2024-07-15 21:02:54.832428] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.720 [2024-07-15 21:02:54.832469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.720 [2024-07-15 21:02:54.832477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.720 [2024-07-15 21:02:54.832483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.720 [2024-07-15 21:02:54.832489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
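Worth noting about the two target launches in this log: the nvme_cli run pinned the app with a hex core mask (-m 0xF) while this vfio-user run uses an explicit core list (-m '[0,1,2,3]'). Both select cores 0-3, and the DPDK EAL parameter lines in the trace show how each form is translated:

nvmf_tgt -i 0 -e 0xFFFF -m 0xF           # hex core mask -> EAL '-c 0xF'
nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'   # core list     -> EAL '-l 0,1,2,3'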
00:12:27.720 [2024-07-15 21:02:54.832629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.720 [2024-07-15 21:02:54.832742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.720 [2024-07-15 21:02:54.832897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.720 [2024-07-15 21:02:54.832898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.299 21:02:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.299 21:02:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:28.299 21:02:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:29.237 21:02:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:29.496 21:02:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:29.496 21:02:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:29.496 21:02:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.496 21:02:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:29.496 21:02:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:29.756 Malloc1 00:12:29.756 21:02:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:29.756 21:02:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:30.016 21:02:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:30.276 21:02:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.276 21:02:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:30.276 21:02:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:30.276 Malloc2 00:12:30.276 21:02:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:30.535 21:02:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:30.795 21:02:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:30.795 21:02:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:30.795 21:02:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:30.795 21:02:58 
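setup_nvmf_vfio_user, traced above, creates the VFIOUSER transport and then provisions one malloc-backed subsystem per test device, each listening on a vfio-user socket directory instead of an IP address; the spdk_nvme_identify run that follows attaches to the first of those endpoints. Condensed into the equivalent rpc.py sequence:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER

for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
         -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done

# identify controller 1 over the vfio-user transport
./build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci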
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.795 21:02:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:30.795 21:02:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:30.795 21:02:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:30.795 [2024-07-15 21:02:58.048297] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:12:30.795 [2024-07-15 21:02:58.048341] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864192 ] 00:12:30.795 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.795 [2024-07-15 21:02:58.079863] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:31.058 [2024-07-15 21:02:58.085223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:31.058 [2024-07-15 21:02:58.085246] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd28b57d000 00:12:31.058 [2024-07-15 21:02:58.086223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:31.058 [2024-07-15 21:02:58.087220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:31.058 [2024-07-15 21:02:58.088237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:31.058 [2024-07-15 21:02:58.089234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:31.058 [2024-07-15 21:02:58.090243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:31.058 [2024-07-15 21:02:58.091245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:31.058 [2024-07-15 21:02:58.092250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:31.058 [2024-07-15 21:02:58.093251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:31.058 [2024-07-15 21:02:58.094259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:31.058 [2024-07-15 21:02:58.094267] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd28b572000 00:12:31.058 [2024-07-15 21:02:58.095594] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:31.058 [2024-07-15 21:02:58.116521] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:31.058 [2024-07-15 21:02:58.116552] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:31.058 [2024-07-15 21:02:58.119411] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:31.058 [2024-07-15 21:02:58.119457] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:31.058 [2024-07-15 21:02:58.119546] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:31.058 [2024-07-15 21:02:58.119564] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:31.058 [2024-07-15 21:02:58.119570] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:31.058 [2024-07-15 21:02:58.120408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:31.058 [2024-07-15 21:02:58.120418] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:31.058 [2024-07-15 21:02:58.120425] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:31.058 [2024-07-15 21:02:58.121418] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:31.058 [2024-07-15 21:02:58.121427] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:31.058 [2024-07-15 21:02:58.121434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:31.058 [2024-07-15 21:02:58.122424] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:31.058 [2024-07-15 21:02:58.122432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:31.058 [2024-07-15 21:02:58.123422] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:31.058 [2024-07-15 21:02:58.123430] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:31.058 [2024-07-15 21:02:58.123435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:31.058 [2024-07-15 21:02:58.123442] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:31.058 [2024-07-15 21:02:58.123547] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:31.058 [2024-07-15 21:02:58.123552] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:31.059 [2024-07-15 21:02:58.123557] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:31.059 [2024-07-15 21:02:58.124430] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:31.059 [2024-07-15 21:02:58.125441] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:31.059 [2024-07-15 21:02:58.126445] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:31.059 [2024-07-15 21:02:58.127443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:31.059 [2024-07-15 21:02:58.127521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:31.059 [2024-07-15 21:02:58.128461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:31.059 [2024-07-15 21:02:58.128468] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:31.059 [2024-07-15 21:02:58.128473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:31.059 [2024-07-15 21:02:58.128501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128517] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:31.059 [2024-07-15 21:02:58.128522] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:31.059 [2024-07-15 21:02:58.128535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:31.059 [2024-07-15 21:02:58.128574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:31.059 [2024-07-15 21:02:58.128583] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:31.059 [2024-07-15 21:02:58.128589] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:31.059 [2024-07-15 21:02:58.128594] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:31.059 [2024-07-15 21:02:58.128598] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:31.059 [2024-07-15 21:02:58.128603] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:31.059 [2024-07-15 21:02:58.128608] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:31.059 [2024-07-15 21:02:58.128613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:31.059 [2024-07-15 21:02:58.128638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:31.059 [2024-07-15 21:02:58.128651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.059 [2024-07-15 21:02:58.128660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.059 [2024-07-15 21:02:58.128668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.059 [2024-07-15 21:02:58.128676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.059 [2024-07-15 21:02:58.128683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128693] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:31.059 [2024-07-15 21:02:58.128709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:31.059 [2024-07-15 21:02:58.128715] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:31.059 [2024-07-15 21:02:58.128720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:31.059 [2024-07-15 21:02:58.128751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:31.059 [2024-07-15 21:02:58.128886] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128902] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:31.059 [2024-07-15 21:02:58.128906] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:31.059 [2024-07-15 21:02:58.128913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:31.059 [2024-07-15 21:02:58.128924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:31.059 [2024-07-15 21:02:58.128934] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:31.059 [2024-07-15 21:02:58.128947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.128961] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:31.059 [2024-07-15 21:02:58.128966] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:31.059 [2024-07-15 21:02:58.128972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:31.059 [2024-07-15 21:02:58.128990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:31.059 [2024-07-15 21:02:58.129002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.129009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.129018] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:31.059 [2024-07-15 21:02:58.129023] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:31.059 [2024-07-15 21:02:58.129029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:31.059 [2024-07-15 21:02:58.129038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:31.059 [2024-07-15 21:02:58.129046] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.129052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:31.059 [2024-07-15 21:02:58.129060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.129066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.129071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.129076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.129081] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:31.059 [2024-07-15 21:02:58.129085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:31.059 [2024-07-15 21:02:58.129091] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:31.059 [2024-07-15 21:02:58.129108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:31.060 [2024-07-15 21:02:58.129118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:31.060 [2024-07-15 21:02:58.129130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:31.060 [2024-07-15 21:02:58.129139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:31.060 [2024-07-15 21:02:58.129150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:31.060 [2024-07-15 21:02:58.129159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:31.060 [2024-07-15 21:02:58.129170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:31.060 [2024-07-15 21:02:58.129177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:31.060 [2024-07-15 21:02:58.129190] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:31.060 [2024-07-15 21:02:58.129194] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:31.060 [2024-07-15 21:02:58.129198] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:31.060 [2024-07-15 21:02:58.129201] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:31.060 [2024-07-15 21:02:58.129208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:31.060 [2024-07-15 21:02:58.129215] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:31.060 
[2024-07-15 21:02:58.129221] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:31.060 [2024-07-15 21:02:58.129227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:31.060 [2024-07-15 21:02:58.129239] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:31.060 [2024-07-15 21:02:58.129243] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:31.060 [2024-07-15 21:02:58.129249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:31.060 [2024-07-15 21:02:58.129257] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:31.060 [2024-07-15 21:02:58.129261] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:31.060 [2024-07-15 21:02:58.129267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:31.060 [2024-07-15 21:02:58.129274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:31.060 [2024-07-15 21:02:58.129286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:31.060 [2024-07-15 21:02:58.129296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:31.060 [2024-07-15 21:02:58.129303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:31.060 ===================================================== 00:12:31.060 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:31.060 ===================================================== 00:12:31.060 Controller Capabilities/Features 00:12:31.060 ================================ 00:12:31.060 Vendor ID: 4e58 00:12:31.060 Subsystem Vendor ID: 4e58 00:12:31.060 Serial Number: SPDK1 00:12:31.060 Model Number: SPDK bdev Controller 00:12:31.060 Firmware Version: 24.09 00:12:31.060 Recommended Arb Burst: 6 00:12:31.060 IEEE OUI Identifier: 8d 6b 50 00:12:31.060 Multi-path I/O 00:12:31.060 May have multiple subsystem ports: Yes 00:12:31.060 May have multiple controllers: Yes 00:12:31.060 Associated with SR-IOV VF: No 00:12:31.060 Max Data Transfer Size: 131072 00:12:31.060 Max Number of Namespaces: 32 00:12:31.060 Max Number of I/O Queues: 127 00:12:31.060 NVMe Specification Version (VS): 1.3 00:12:31.060 NVMe Specification Version (Identify): 1.3 00:12:31.060 Maximum Queue Entries: 256 00:12:31.060 Contiguous Queues Required: Yes 00:12:31.060 Arbitration Mechanisms Supported 00:12:31.060 Weighted Round Robin: Not Supported 00:12:31.060 Vendor Specific: Not Supported 00:12:31.060 Reset Timeout: 15000 ms 00:12:31.060 Doorbell Stride: 4 bytes 00:12:31.060 NVM Subsystem Reset: Not Supported 00:12:31.060 Command Sets Supported 00:12:31.060 NVM Command Set: Supported 00:12:31.060 Boot Partition: Not Supported 00:12:31.060 Memory Page Size Minimum: 4096 bytes 00:12:31.060 Memory Page Size Maximum: 4096 bytes 00:12:31.060 Persistent Memory Region: Not Supported 
00:12:31.060 Optional Asynchronous Events Supported 00:12:31.060 Namespace Attribute Notices: Supported 00:12:31.060 Firmware Activation Notices: Not Supported 00:12:31.060 ANA Change Notices: Not Supported 00:12:31.060 PLE Aggregate Log Change Notices: Not Supported 00:12:31.060 LBA Status Info Alert Notices: Not Supported 00:12:31.060 EGE Aggregate Log Change Notices: Not Supported 00:12:31.060 Normal NVM Subsystem Shutdown event: Not Supported 00:12:31.060 Zone Descriptor Change Notices: Not Supported 00:12:31.060 Discovery Log Change Notices: Not Supported 00:12:31.060 Controller Attributes 00:12:31.060 128-bit Host Identifier: Supported 00:12:31.060 Non-Operational Permissive Mode: Not Supported 00:12:31.060 NVM Sets: Not Supported 00:12:31.060 Read Recovery Levels: Not Supported 00:12:31.060 Endurance Groups: Not Supported 00:12:31.060 Predictable Latency Mode: Not Supported 00:12:31.060 Traffic Based Keep ALive: Not Supported 00:12:31.060 Namespace Granularity: Not Supported 00:12:31.060 SQ Associations: Not Supported 00:12:31.060 UUID List: Not Supported 00:12:31.060 Multi-Domain Subsystem: Not Supported 00:12:31.060 Fixed Capacity Management: Not Supported 00:12:31.060 Variable Capacity Management: Not Supported 00:12:31.060 Delete Endurance Group: Not Supported 00:12:31.060 Delete NVM Set: Not Supported 00:12:31.060 Extended LBA Formats Supported: Not Supported 00:12:31.060 Flexible Data Placement Supported: Not Supported 00:12:31.060 00:12:31.060 Controller Memory Buffer Support 00:12:31.060 ================================ 00:12:31.060 Supported: No 00:12:31.060 00:12:31.060 Persistent Memory Region Support 00:12:31.060 ================================ 00:12:31.060 Supported: No 00:12:31.060 00:12:31.060 Admin Command Set Attributes 00:12:31.060 ============================ 00:12:31.060 Security Send/Receive: Not Supported 00:12:31.060 Format NVM: Not Supported 00:12:31.060 Firmware Activate/Download: Not Supported 00:12:31.060 Namespace Management: Not Supported 00:12:31.060 Device Self-Test: Not Supported 00:12:31.060 Directives: Not Supported 00:12:31.060 NVMe-MI: Not Supported 00:12:31.060 Virtualization Management: Not Supported 00:12:31.060 Doorbell Buffer Config: Not Supported 00:12:31.060 Get LBA Status Capability: Not Supported 00:12:31.060 Command & Feature Lockdown Capability: Not Supported 00:12:31.060 Abort Command Limit: 4 00:12:31.060 Async Event Request Limit: 4 00:12:31.060 Number of Firmware Slots: N/A 00:12:31.060 Firmware Slot 1 Read-Only: N/A 00:12:31.060 Firmware Activation Without Reset: N/A 00:12:31.060 Multiple Update Detection Support: N/A 00:12:31.060 Firmware Update Granularity: No Information Provided 00:12:31.060 Per-Namespace SMART Log: No 00:12:31.060 Asymmetric Namespace Access Log Page: Not Supported 00:12:31.060 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:31.060 Command Effects Log Page: Supported 00:12:31.060 Get Log Page Extended Data: Supported 00:12:31.060 Telemetry Log Pages: Not Supported 00:12:31.060 Persistent Event Log Pages: Not Supported 00:12:31.060 Supported Log Pages Log Page: May Support 00:12:31.061 Commands Supported & Effects Log Page: Not Supported 00:12:31.061 Feature Identifiers & Effects Log Page:May Support 00:12:31.061 NVMe-MI Commands & Effects Log Page: May Support 00:12:31.061 Data Area 4 for Telemetry Log: Not Supported 00:12:31.061 Error Log Page Entries Supported: 128 00:12:31.061 Keep Alive: Supported 00:12:31.061 Keep Alive Granularity: 10000 ms 00:12:31.061 00:12:31.061 NVM Command Set Attributes 
00:12:31.061 ========================== 00:12:31.061 Submission Queue Entry Size 00:12:31.061 Max: 64 00:12:31.061 Min: 64 00:12:31.061 Completion Queue Entry Size 00:12:31.061 Max: 16 00:12:31.061 Min: 16 00:12:31.061 Number of Namespaces: 32 00:12:31.061 Compare Command: Supported 00:12:31.061 Write Uncorrectable Command: Not Supported 00:12:31.061 Dataset Management Command: Supported 00:12:31.061 Write Zeroes Command: Supported 00:12:31.061 Set Features Save Field: Not Supported 00:12:31.061 Reservations: Not Supported 00:12:31.061 Timestamp: Not Supported 00:12:31.061 Copy: Supported 00:12:31.061 Volatile Write Cache: Present 00:12:31.061 Atomic Write Unit (Normal): 1 00:12:31.061 Atomic Write Unit (PFail): 1 00:12:31.061 Atomic Compare & Write Unit: 1 00:12:31.061 Fused Compare & Write: Supported 00:12:31.061 Scatter-Gather List 00:12:31.061 SGL Command Set: Supported (Dword aligned) 00:12:31.061 SGL Keyed: Not Supported 00:12:31.061 SGL Bit Bucket Descriptor: Not Supported 00:12:31.061 SGL Metadata Pointer: Not Supported 00:12:31.061 Oversized SGL: Not Supported 00:12:31.061 SGL Metadata Address: Not Supported 00:12:31.061 SGL Offset: Not Supported 00:12:31.061 Transport SGL Data Block: Not Supported 00:12:31.061 Replay Protected Memory Block: Not Supported 00:12:31.061 00:12:31.061 Firmware Slot Information 00:12:31.061 ========================= 00:12:31.061 Active slot: 1 00:12:31.061 Slot 1 Firmware Revision: 24.09 00:12:31.061 00:12:31.061 00:12:31.061 Commands Supported and Effects 00:12:31.061 ============================== 00:12:31.061 Admin Commands 00:12:31.061 -------------- 00:12:31.061 Get Log Page (02h): Supported 00:12:31.061 Identify (06h): Supported 00:12:31.061 Abort (08h): Supported 00:12:31.061 Set Features (09h): Supported 00:12:31.061 Get Features (0Ah): Supported 00:12:31.061 Asynchronous Event Request (0Ch): Supported 00:12:31.061 Keep Alive (18h): Supported 00:12:31.061 I/O Commands 00:12:31.061 ------------ 00:12:31.061 Flush (00h): Supported LBA-Change 00:12:31.061 Write (01h): Supported LBA-Change 00:12:31.061 Read (02h): Supported 00:12:31.061 Compare (05h): Supported 00:12:31.061 Write Zeroes (08h): Supported LBA-Change 00:12:31.061 Dataset Management (09h): Supported LBA-Change 00:12:31.061 Copy (19h): Supported LBA-Change 00:12:31.061 00:12:31.061 Error Log 00:12:31.061 ========= 00:12:31.061 00:12:31.061 Arbitration 00:12:31.061 =========== 00:12:31.061 Arbitration Burst: 1 00:12:31.061 00:12:31.061 Power Management 00:12:31.061 ================ 00:12:31.061 Number of Power States: 1 00:12:31.061 Current Power State: Power State #0 00:12:31.061 Power State #0: 00:12:31.061 Max Power: 0.00 W 00:12:31.061 Non-Operational State: Operational 00:12:31.061 Entry Latency: Not Reported 00:12:31.061 Exit Latency: Not Reported 00:12:31.061 Relative Read Throughput: 0 00:12:31.061 Relative Read Latency: 0 00:12:31.061 Relative Write Throughput: 0 00:12:31.061 Relative Write Latency: 0 00:12:31.061 Idle Power: Not Reported 00:12:31.061 Active Power: Not Reported 00:12:31.061 Non-Operational Permissive Mode: Not Supported 00:12:31.061 00:12:31.061 Health Information 00:12:31.061 ================== 00:12:31.061 Critical Warnings: 00:12:31.061 Available Spare Space: OK 00:12:31.061 Temperature: OK 00:12:31.061 Device Reliability: OK 00:12:31.061 Read Only: No 00:12:31.061 Volatile Memory Backup: OK 00:12:31.061 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:31.061 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:31.061 Available Spare: 0% 00:12:31.061 
[2024-07-15 21:02:58.129410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:31.061 [2024-07-15 21:02:58.129418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:31.061 [2024-07-15 21:02:58.129446] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:31.061 [2024-07-15 21:02:58.129456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.061 [2024-07-15 21:02:58.129462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.061 [2024-07-15 21:02:58.129469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.061 [2024-07-15 21:02:58.129475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.061 [2024-07-15 21:02:58.130470] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:31.061 [2024-07-15 21:02:58.130482] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:31.061 [2024-07-15 21:02:58.131472] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:31.061 [2024-07-15 21:02:58.131511] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:31.061 [2024-07-15 21:02:58.131517] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:31.061 [2024-07-15 21:02:58.132485] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:31.061 [2024-07-15 21:02:58.132495] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:31.061 [2024-07-15 21:02:58.132559] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:31.061 [2024-07-15 21:02:58.136237] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:31.061 Available Spare Threshold: 0% 00:12:31.061 Life Percentage Used: 0% 00:12:31.061 Data Units Read: 0 00:12:31.061 Data Units Written: 0 00:12:31.061 Host Read Commands: 0 00:12:31.061 Host Write Commands: 0 00:12:31.061 Controller Busy Time: 0 minutes 00:12:31.061 Power Cycles: 0 00:12:31.061 Power On Hours: 0 hours 00:12:31.061 Unsafe Shutdowns: 0 00:12:31.061 Unrecoverable Media Errors: 0 00:12:31.061 Lifetime Error Log Entries: 0 00:12:31.061 Warning Temperature Time: 0 minutes 00:12:31.061 Critical Temperature Time: 0 minutes 00:12:31.061 00:12:31.061 Number of Queues 00:12:31.061 ================ 00:12:31.061 Number of I/O Submission Queues: 127 00:12:31.061 Number of I/O Completion Queues: 127 00:12:31.061 00:12:31.061 Active Namespaces 00:12:31.061 ================= 00:12:31.061 Namespace ID:1 00:12:31.061 Error Recovery Timeout: Unlimited 00:12:31.061 Command 
Set Identifier: NVM (00h) 00:12:31.061 Deallocate: Supported 00:12:31.061 Deallocated/Unwritten Error: Not Supported 00:12:31.062 Deallocated Read Value: Unknown 00:12:31.062 Deallocate in Write Zeroes: Not Supported 00:12:31.062 Deallocated Guard Field: 0xFFFF 00:12:31.062 Flush: Supported 00:12:31.062 Reservation: Supported 00:12:31.062 Namespace Sharing Capabilities: Multiple Controllers 00:12:31.062 Size (in LBAs): 131072 (0GiB) 00:12:31.062 Capacity (in LBAs): 131072 (0GiB) 00:12:31.062 Utilization (in LBAs): 131072 (0GiB) 00:12:31.062 NGUID: C910EA890A03451595E9886510ACB6D2 00:12:31.062 UUID: c910ea89-0a03-4515-95e9-886510acb6d2 00:12:31.062 Thin Provisioning: Not Supported 00:12:31.062 Per-NS Atomic Units: Yes 00:12:31.062 Atomic Boundary Size (Normal): 0 00:12:31.062 Atomic Boundary Size (PFail): 0 00:12:31.062 Atomic Boundary Offset: 0 00:12:31.062 Maximum Single Source Range Length: 65535 00:12:31.062 Maximum Copy Length: 65535 00:12:31.062 Maximum Source Range Count: 1 00:12:31.062 NGUID/EUI64 Never Reused: No 00:12:31.062 Namespace Write Protected: No 00:12:31.062 Number of LBA Formats: 1 00:12:31.062 Current LBA Format: LBA Format #00 00:12:31.062 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:31.062 00:12:31.062 21:02:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:31.062 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.062 [2024-07-15 21:02:58.319851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.348 Initializing NVMe Controllers 00:12:36.348 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.348 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:36.348 Initialization complete. Launching workers. 00:12:36.348 ======================================================== 00:12:36.348 Latency(us) 00:12:36.348 Device Information : IOPS MiB/s Average min max 00:12:36.348 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39920.74 155.94 3206.23 846.22 10784.31 00:12:36.348 ======================================================== 00:12:36.348 Total : 39920.74 155.94 3206.23 846.22 10784.31 00:12:36.348 00:12:36.348 [2024-07-15 21:03:03.341957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.348 21:03:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:36.348 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.348 [2024-07-15 21:03:03.523846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.635 Initializing NVMe Controllers 00:12:41.635 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:41.635 Initialization complete. Launching workers. 
00:12:41.635 ======================================================== 00:12:41.635 Latency(us) 00:12:41.635 Device Information : IOPS MiB/s Average min max 00:12:41.635 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15923.20 62.20 8048.10 7465.62 15963.41 00:12:41.635 ======================================================== 00:12:41.635 Total : 15923.20 62.20 8048.10 7465.62 15963.41 00:12:41.635 00:12:41.635 [2024-07-15 21:03:08.563719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.635 21:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:41.635 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.635 [2024-07-15 21:03:08.759561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.922 [2024-07-15 21:03:13.843460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.922 Initializing NVMe Controllers 00:12:46.922 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:46.922 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:46.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:46.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:46.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:46.922 Initialization complete. Launching workers. 00:12:46.922 Starting thread on core 2 00:12:46.922 Starting thread on core 3 00:12:46.922 Starting thread on core 1 00:12:46.923 21:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:46.923 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.923 [2024-07-15 21:03:14.118659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:50.224 [2024-07-15 21:03:17.178352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:50.224 Initializing NVMe Controllers 00:12:50.224 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.224 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:50.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:50.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:50.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:50.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:50.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:50.224 Initialization complete. Launching workers. 
00:12:50.224 Starting thread on core 1 with urgent priority queue 00:12:50.224 Starting thread on core 2 with urgent priority queue 00:12:50.224 Starting thread on core 3 with urgent priority queue 00:12:50.224 Starting thread on core 0 with urgent priority queue 00:12:50.224 SPDK bdev Controller (SPDK1 ) core 0: 13085.00 IO/s 7.64 secs/100000 ios 00:12:50.224 SPDK bdev Controller (SPDK1 ) core 1: 10039.00 IO/s 9.96 secs/100000 ios 00:12:50.224 SPDK bdev Controller (SPDK1 ) core 2: 9222.33 IO/s 10.84 secs/100000 ios 00:12:50.224 SPDK bdev Controller (SPDK1 ) core 3: 10861.33 IO/s 9.21 secs/100000 ios 00:12:50.224 ======================================================== 00:12:50.224 00:12:50.224 21:03:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:50.224 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.224 [2024-07-15 21:03:17.450663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:50.224 Initializing NVMe Controllers 00:12:50.224 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.224 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.224 Namespace ID: 1 size: 0GB 00:12:50.224 Initialization complete. 00:12:50.224 INFO: using host memory buffer for IO 00:12:50.224 Hello world! 00:12:50.224 [2024-07-15 21:03:17.486886] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:50.485 21:03:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:50.485 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.485 [2024-07-15 21:03:17.756674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.868 Initializing NVMe Controllers 00:12:51.868 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.868 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.868 Initialization complete. Launching workers. 
00:12:51.868 submit (in ns) avg, min, max = 7231.8, 3934.2, 4000086.7 00:12:51.868 complete (in ns) avg, min, max = 20900.2, 2385.0, 3998885.8 00:12:51.868 00:12:51.868 Submit histogram 00:12:51.868 ================ 00:12:51.868 Range in us Cumulative Count 00:12:51.868 3.920 - 3.947: 0.0838% ( 16) 00:12:51.868 3.947 - 3.973: 3.7706% ( 704) 00:12:51.868 3.973 - 4.000: 11.8565% ( 1544) 00:12:51.868 4.000 - 4.027: 22.5347% ( 2039) 00:12:51.868 4.027 - 4.053: 33.7732% ( 2146) 00:12:51.868 4.053 - 4.080: 46.3577% ( 2403) 00:12:51.868 4.080 - 4.107: 63.9958% ( 3368) 00:12:51.868 4.107 - 4.133: 78.6541% ( 2799) 00:12:51.868 4.133 - 4.160: 89.3585% ( 2044) 00:12:51.868 4.160 - 4.187: 94.9568% ( 1069) 00:12:51.868 4.187 - 4.213: 97.7795% ( 539) 00:12:51.868 4.213 - 4.240: 98.9107% ( 216) 00:12:51.868 4.240 - 4.267: 99.3349% ( 81) 00:12:51.868 4.267 - 4.293: 99.4711% ( 26) 00:12:51.868 4.293 - 4.320: 99.5130% ( 8) 00:12:51.868 4.320 - 4.347: 99.5287% ( 3) 00:12:51.868 4.453 - 4.480: 99.5339% ( 1) 00:12:51.868 4.480 - 4.507: 99.5391% ( 1) 00:12:51.868 4.693 - 4.720: 99.5444% ( 1) 00:12:51.868 4.773 - 4.800: 99.5549% ( 2) 00:12:51.868 5.067 - 5.093: 99.5601% ( 1) 00:12:51.868 5.093 - 5.120: 99.5653% ( 1) 00:12:51.868 5.147 - 5.173: 99.5706% ( 1) 00:12:51.868 5.360 - 5.387: 99.5758% ( 1) 00:12:51.868 5.493 - 5.520: 99.5810% ( 1) 00:12:51.868 5.520 - 5.547: 99.5863% ( 1) 00:12:51.868 5.760 - 5.787: 99.5915% ( 1) 00:12:51.868 5.840 - 5.867: 99.5968% ( 1) 00:12:51.868 5.867 - 5.893: 99.6020% ( 1) 00:12:51.868 5.893 - 5.920: 99.6072% ( 1) 00:12:51.868 5.973 - 6.000: 99.6125% ( 1) 00:12:51.868 6.053 - 6.080: 99.6282% ( 3) 00:12:51.868 6.080 - 6.107: 99.6334% ( 1) 00:12:51.868 6.133 - 6.160: 99.6439% ( 2) 00:12:51.868 6.160 - 6.187: 99.6491% ( 1) 00:12:51.868 6.187 - 6.213: 99.6544% ( 1) 00:12:51.868 6.240 - 6.267: 99.6596% ( 1) 00:12:51.868 6.347 - 6.373: 99.6701% ( 2) 00:12:51.868 6.373 - 6.400: 99.6753% ( 1) 00:12:51.868 6.427 - 6.453: 99.6858% ( 2) 00:12:51.868 6.453 - 6.480: 99.6910% ( 1) 00:12:51.868 6.480 - 6.507: 99.7067% ( 3) 00:12:51.868 6.507 - 6.533: 99.7120% ( 1) 00:12:51.868 6.533 - 6.560: 99.7172% ( 1) 00:12:51.868 6.560 - 6.587: 99.7224% ( 1) 00:12:51.868 6.640 - 6.667: 99.7329% ( 2) 00:12:51.868 6.667 - 6.693: 99.7382% ( 1) 00:12:51.868 6.693 - 6.720: 99.7539% ( 3) 00:12:51.868 6.720 - 6.747: 99.7591% ( 1) 00:12:51.869 6.747 - 6.773: 99.7696% ( 2) 00:12:51.869 6.773 - 6.800: 99.7800% ( 2) 00:12:51.869 6.827 - 6.880: 99.7905% ( 2) 00:12:51.869 6.880 - 6.933: 99.8115% ( 4) 00:12:51.869 6.933 - 6.987: 99.8219% ( 2) 00:12:51.869 6.987 - 7.040: 99.8272% ( 1) 00:12:51.869 7.040 - 7.093: 99.8481% ( 4) 00:12:51.869 7.200 - 7.253: 99.8534% ( 1) 00:12:51.869 7.253 - 7.307: 99.8638% ( 2) 00:12:51.869 7.413 - 7.467: 99.8743% ( 2) 00:12:51.869 7.520 - 7.573: 99.8795% ( 1) 00:12:51.869 7.733 - 7.787: 99.8848% ( 1) 00:12:51.869 7.947 - 8.000: 99.8953% ( 2) 00:12:51.869 8.533 - 8.587: 99.9005% ( 1) 00:12:51.869 9.013 - 9.067: 99.9057% ( 1) 00:12:51.869 9.120 - 9.173: 99.9110% ( 1) 00:12:51.869 13.600 - 13.653: 99.9162% ( 1) 00:12:51.869 54.613 - 55.040: 99.9214% ( 1) 00:12:51.869 3986.773 - 4014.080: 100.0000% ( 15) 00:12:51.869 00:12:51.869 Complete histogram 00:12:51.869 ================== 00:12:51.869 Range in us Cumulative Count 00:12:51.869 2.373 - 2.387: 0.0105% ( 2) 00:12:51.869 2.387 - 2.400: 0.9793% ( 185) 00:12:51.869 2.400 - 2.413: 1.0736% ( 18) 00:12:51.869 2.413 - 2.427: 1.1993% ( 24) 00:12:51.869 2.427 - 2.440: 1.2673% ( 13) 00:12:51.869 2.440 - 2.453: 24.1948% ( 4378) 00:12:51.869 2.453 - 
2.467: 56.1508% ( 6102) 00:12:51.869 2.467 - 2.480: 66.7819% ( 2030) 00:12:51.869 2.480 - 2.493: 76.1299% ( 1785) 00:12:51.869 [2024-07-15 21:03:18.777106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.869 2.493 - 2.507: 81.0369% ( 937) 00:12:51.869 2.507 - 2.520: 82.8384% ( 344) 00:12:51.869 2.520 - 2.533: 88.4211% ( 1066) 00:12:51.869 2.533 - 2.547: 93.8204% ( 1031) 00:12:51.869 2.547 - 2.560: 96.6798% ( 546) 00:12:51.869 2.560 - 2.573: 98.2299% ( 296) 00:12:51.869 2.573 - 2.587: 99.0207% ( 151) 00:12:51.869 2.587 - 2.600: 99.2354% ( 41) 00:12:51.869 2.600 - 2.613: 99.2668% ( 6) 00:12:51.869 2.613 - 2.627: 99.2721% ( 1) 00:12:51.869 2.627 - 2.640: 99.2773% ( 1) 00:12:51.869 2.680 - 2.693: 99.2825% ( 1) 00:12:51.869 4.373 - 4.400: 99.2878% ( 1) 00:12:51.869 4.533 - 4.560: 99.2982% ( 2) 00:12:51.869 4.587 - 4.613: 99.3087% ( 2) 00:12:51.869 4.720 - 4.747: 99.3140% ( 1) 00:12:51.869 4.747 - 4.773: 99.3244% ( 2) 00:12:51.869 4.773 - 4.800: 99.3297% ( 1) 00:12:51.869 4.880 - 4.907: 99.3401% ( 2) 00:12:51.869 4.907 - 4.933: 99.3454% ( 1) 00:12:51.869 4.960 - 4.987: 99.3506% ( 1) 00:12:51.869 5.013 - 5.040: 99.3559% ( 1) 00:12:51.869 5.040 - 5.067: 99.3611% ( 1) 00:12:51.869 5.067 - 5.093: 99.3716% ( 2) 00:12:51.869 5.093 - 5.120: 99.3768% ( 1) 00:12:51.869 5.147 - 5.173: 99.3820% ( 1) 00:12:51.869 5.173 - 5.200: 99.3977% ( 3) 00:12:51.869 5.200 - 5.227: 99.4082% ( 2) 00:12:51.869 5.280 - 5.307: 99.4135% ( 1) 00:12:51.869 5.307 - 5.333: 99.4187% ( 1) 00:12:51.869 5.333 - 5.360: 99.4292% ( 2) 00:12:51.869 5.413 - 5.440: 99.4396% ( 2) 00:12:51.869 5.467 - 5.493: 99.4449% ( 1) 00:12:51.869 5.520 - 5.547: 99.4501% ( 1) 00:12:51.869 5.573 - 5.600: 99.4554% ( 1) 00:12:51.869 5.627 - 5.653: 99.4658% ( 2) 00:12:51.869 5.680 - 5.707: 99.4711% ( 1) 00:12:51.869 5.733 - 5.760: 99.4763% ( 1) 00:12:51.869 5.760 - 5.787: 99.4815% ( 1) 00:12:51.869 5.867 - 5.893: 99.4920% ( 2) 00:12:51.869 6.000 - 6.027: 99.5025% ( 2) 00:12:51.869 6.267 - 6.293: 99.5077% ( 1) 00:12:51.869 6.373 - 6.400: 99.5130% ( 1) 00:12:51.869 7.360 - 7.413: 99.5182% ( 1) 00:12:51.869 12.000 - 12.053: 99.5234% ( 1) 00:12:51.869 12.640 - 12.693: 99.5287% ( 1) 00:12:51.869 43.733 - 43.947: 99.5339% ( 1) 00:12:51.869 103.253 - 103.680: 99.5391% ( 1) 00:12:51.869 3986.773 - 4014.080: 100.0000% ( 88) 00:12:51.869 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.869 [ 00:12:51.869 { 00:12:51.869 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.869 "subtype": "Discovery", 00:12:51.869 "listen_addresses": [], 00:12:51.869 "allow_any_host": true, 00:12:51.869 "hosts": [] 00:12:51.869 }, 00:12:51.869 { 00:12:51.869 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.869 "subtype": "NVMe", 00:12:51.869 "listen_addresses": [ 00:12:51.869 { 00:12:51.869 "trtype": "VFIOUSER", 00:12:51.869 "adrfam": "IPv4", 00:12:51.869 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.869 "trsvcid": "0" 00:12:51.869 } 00:12:51.869 ], 00:12:51.869 "allow_any_host": true, 00:12:51.869 "hosts": [], 00:12:51.869 "serial_number": "SPDK1", 00:12:51.869 "model_number": "SPDK bdev Controller", 00:12:51.869 "max_namespaces": 32, 00:12:51.869 "min_cntlid": 1, 00:12:51.869 "max_cntlid": 65519, 00:12:51.869 "namespaces": [ 00:12:51.869 { 00:12:51.869 "nsid": 1, 00:12:51.869 "bdev_name": "Malloc1", 00:12:51.869 "name": "Malloc1", 00:12:51.869 "nguid": "C910EA890A03451595E9886510ACB6D2", 00:12:51.869 "uuid": "c910ea89-0a03-4515-95e9-886510acb6d2" 00:12:51.869 } 00:12:51.869 ] 00:12:51.869 }, 00:12:51.869 { 00:12:51.869 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.869 "subtype": "NVMe", 00:12:51.869 "listen_addresses": [ 00:12:51.869 { 00:12:51.869 "trtype": "VFIOUSER", 00:12:51.869 "adrfam": "IPv4", 00:12:51.869 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.869 "trsvcid": "0" 00:12:51.869 } 00:12:51.869 ], 00:12:51.869 "allow_any_host": true, 00:12:51.869 "hosts": [], 00:12:51.869 "serial_number": "SPDK2", 00:12:51.869 "model_number": "SPDK bdev Controller", 00:12:51.869 "max_namespaces": 32, 00:12:51.869 "min_cntlid": 1, 00:12:51.869 "max_cntlid": 65519, 00:12:51.869 "namespaces": [ 00:12:51.869 { 00:12:51.869 "nsid": 1, 00:12:51.869 "bdev_name": "Malloc2", 00:12:51.869 "name": "Malloc2", 00:12:51.869 "nguid": "AA20FC6FD63546EF8991BB68140C9366", 00:12:51.869 "uuid": "aa20fc6f-d635-46ef-8991-bb68140c9366" 00:12:51.869 } 00:12:51.869 ] 00:12:51.869 } 00:12:51.869 ] 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1868774 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:51.869 21:03:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:51.869 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:51.869 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.131 Malloc3 00:12:52.131 [2024-07-15 21:03:19.178009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:52.131 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:52.131 [2024-07-15 21:03:19.331976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:52.131 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:52.131 Asynchronous Event Request test 00:12:52.131 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:52.131 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:52.131 Registering asynchronous event callbacks... 00:12:52.131 Starting namespace attribute notice tests for all controllers... 00:12:52.131 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:52.131 aer_cb - Changed Namespace 00:12:52.131 Cleaning up... 00:12:52.393 [ 00:12:52.393 { 00:12:52.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:52.393 "subtype": "Discovery", 00:12:52.393 "listen_addresses": [], 00:12:52.393 "allow_any_host": true, 00:12:52.393 "hosts": [] 00:12:52.393 }, 00:12:52.393 { 00:12:52.393 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:52.393 "subtype": "NVMe", 00:12:52.393 "listen_addresses": [ 00:12:52.393 { 00:12:52.393 "trtype": "VFIOUSER", 00:12:52.393 "adrfam": "IPv4", 00:12:52.393 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:52.393 "trsvcid": "0" 00:12:52.393 } 00:12:52.393 ], 00:12:52.393 "allow_any_host": true, 00:12:52.393 "hosts": [], 00:12:52.393 "serial_number": "SPDK1", 00:12:52.393 "model_number": "SPDK bdev Controller", 00:12:52.393 "max_namespaces": 32, 00:12:52.393 "min_cntlid": 1, 00:12:52.393 "max_cntlid": 65519, 00:12:52.393 "namespaces": [ 00:12:52.393 { 00:12:52.393 "nsid": 1, 00:12:52.393 "bdev_name": "Malloc1", 00:12:52.393 "name": "Malloc1", 00:12:52.393 "nguid": "C910EA890A03451595E9886510ACB6D2", 00:12:52.393 "uuid": "c910ea89-0a03-4515-95e9-886510acb6d2" 00:12:52.393 }, 00:12:52.393 { 00:12:52.393 "nsid": 2, 00:12:52.393 "bdev_name": "Malloc3", 00:12:52.393 "name": "Malloc3", 00:12:52.393 "nguid": "6585909459AE420DB2FD341CD8950842", 00:12:52.393 "uuid": "65859094-59ae-420d-b2fd-341cd8950842" 00:12:52.393 } 00:12:52.393 ] 00:12:52.393 }, 00:12:52.393 { 00:12:52.393 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:52.393 "subtype": "NVMe", 00:12:52.393 "listen_addresses": [ 00:12:52.393 { 00:12:52.393 "trtype": "VFIOUSER", 00:12:52.393 "adrfam": "IPv4", 00:12:52.393 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:52.393 "trsvcid": "0" 00:12:52.393 } 00:12:52.393 ], 00:12:52.393 "allow_any_host": true, 00:12:52.393 "hosts": [], 00:12:52.393 "serial_number": "SPDK2", 00:12:52.393 "model_number": "SPDK bdev Controller", 00:12:52.393 
"max_namespaces": 32, 00:12:52.393 "min_cntlid": 1, 00:12:52.393 "max_cntlid": 65519, 00:12:52.393 "namespaces": [ 00:12:52.393 { 00:12:52.393 "nsid": 1, 00:12:52.393 "bdev_name": "Malloc2", 00:12:52.393 "name": "Malloc2", 00:12:52.393 "nguid": "AA20FC6FD63546EF8991BB68140C9366", 00:12:52.393 "uuid": "aa20fc6f-d635-46ef-8991-bb68140c9366" 00:12:52.393 } 00:12:52.393 ] 00:12:52.393 } 00:12:52.393 ] 00:12:52.393 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1868774 00:12:52.393 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:52.393 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:52.393 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:52.393 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:52.393 [2024-07-15 21:03:19.556164] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:12:52.393 [2024-07-15 21:03:19.556228] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869013 ] 00:12:52.393 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.393 [2024-07-15 21:03:19.588797] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:52.393 [2024-07-15 21:03:19.597457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:52.393 [2024-07-15 21:03:19.597479] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fed19c34000 00:12:52.393 [2024-07-15 21:03:19.598457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.393 [2024-07-15 21:03:19.599468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.393 [2024-07-15 21:03:19.600468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.393 [2024-07-15 21:03:19.601472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.393 [2024-07-15 21:03:19.602479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.393 [2024-07-15 21:03:19.603484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.393 [2024-07-15 21:03:19.604496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.393 [2024-07-15 21:03:19.605504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.393 [2024-07-15 21:03:19.606512] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:52.393 [2024-07-15 21:03:19.606522] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fed19c29000 00:12:52.393 [2024-07-15 21:03:19.607848] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:52.393 [2024-07-15 21:03:19.624058] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:52.393 [2024-07-15 21:03:19.624083] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:52.393 [2024-07-15 21:03:19.629166] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:52.393 [2024-07-15 21:03:19.629210] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:52.393 [2024-07-15 21:03:19.629294] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:52.393 [2024-07-15 21:03:19.629308] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:52.393 [2024-07-15 21:03:19.629313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:52.393 [2024-07-15 21:03:19.630173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:52.393 [2024-07-15 21:03:19.630183] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:52.393 [2024-07-15 21:03:19.630190] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:52.393 [2024-07-15 21:03:19.631178] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:52.393 [2024-07-15 21:03:19.631188] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:52.393 [2024-07-15 21:03:19.631196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:52.393 [2024-07-15 21:03:19.632186] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:52.393 [2024-07-15 21:03:19.632196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:52.393 [2024-07-15 21:03:19.633193] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:52.393 [2024-07-15 21:03:19.633202] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:52.393 [2024-07-15 21:03:19.633207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:52.393 [2024-07-15 21:03:19.633214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:52.393 [2024-07-15 21:03:19.633323] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:52.393 [2024-07-15 21:03:19.633328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:52.393 [2024-07-15 21:03:19.633333] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:52.394 [2024-07-15 21:03:19.634204] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:52.394 [2024-07-15 21:03:19.635207] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:52.394 [2024-07-15 21:03:19.636216] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:52.394 [2024-07-15 21:03:19.637212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:52.394 [2024-07-15 21:03:19.637256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:52.394 [2024-07-15 21:03:19.638220] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:52.394 [2024-07-15 21:03:19.638228] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:52.394 [2024-07-15 21:03:19.638237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.638258] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:52.394 [2024-07-15 21:03:19.638270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.638283] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.394 [2024-07-15 21:03:19.638288] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.394 [2024-07-15 21:03:19.638300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.394 [2024-07-15 21:03:19.646238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:52.394 [2024-07-15 21:03:19.646250] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:52.394 [2024-07-15 21:03:19.646258] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:52.394 [2024-07-15 21:03:19.646263] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:52.394 [2024-07-15 21:03:19.646267] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:52.394 [2024-07-15 21:03:19.646272] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:52.394 [2024-07-15 21:03:19.646276] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:52.394 [2024-07-15 21:03:19.646281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.646289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.646302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:52.394 [2024-07-15 21:03:19.654236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:52.394 [2024-07-15 21:03:19.654252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.394 [2024-07-15 21:03:19.654261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.394 [2024-07-15 21:03:19.654269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.394 [2024-07-15 21:03:19.654277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.394 [2024-07-15 21:03:19.654282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.654290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.654299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:52.394 [2024-07-15 21:03:19.662235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:52.394 [2024-07-15 21:03:19.662243] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:52.394 [2024-07-15 21:03:19.662248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.662255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.662260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.662269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:52.394 [2024-07-15 21:03:19.670235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:52.394 [2024-07-15 21:03:19.670302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.670310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.670318] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:52.394 [2024-07-15 21:03:19.670322] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:52.394 [2024-07-15 21:03:19.670329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:52.394 [2024-07-15 21:03:19.678237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:52.394 [2024-07-15 21:03:19.678248] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:52.394 [2024-07-15 21:03:19.678256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.678264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:52.394 [2024-07-15 21:03:19.678273] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.394 [2024-07-15 21:03:19.678278] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.394 [2024-07-15 21:03:19.678284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.686235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:52.656 [2024-07-15 21:03:19.686250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.686257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.686265] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.656 [2024-07-15 21:03:19.686269] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.656 [2024-07-15 21:03:19.686275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.694237] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:52.656 [2024-07-15 21:03:19.694247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.694253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.694261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.694267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.694272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.694277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.694282] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:52.656 [2024-07-15 21:03:19.694286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:52.656 [2024-07-15 21:03:19.694291] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:52.656 [2024-07-15 21:03:19.694308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.702236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:52.656 [2024-07-15 21:03:19.702250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.710235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:52.656 [2024-07-15 21:03:19.710248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.718237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:52.656 [2024-07-15 21:03:19.718254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.726237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:52.656 [2024-07-15 21:03:19.726256] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:52.656 [2024-07-15 21:03:19.726261] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:52.656 [2024-07-15 21:03:19.726264] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:52.656 [2024-07-15 21:03:19.726268] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:52.656 [2024-07-15 21:03:19.726275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:52.656 [2024-07-15 21:03:19.726282] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:52.656 [2024-07-15 21:03:19.726287] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:52.656 [2024-07-15 21:03:19.726293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.726300] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:52.656 [2024-07-15 21:03:19.726304] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.656 [2024-07-15 21:03:19.726310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.726318] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:52.656 [2024-07-15 21:03:19.726323] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:52.656 [2024-07-15 21:03:19.726329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:52.656 [2024-07-15 21:03:19.734236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:52.656 [2024-07-15 21:03:19.734250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:52.656 [2024-07-15 21:03:19.734261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:52.657 [2024-07-15 21:03:19.734268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:52.657 ===================================================== 00:12:52.657 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.657 ===================================================== 00:12:52.657 Controller Capabilities/Features 00:12:52.657 ================================ 00:12:52.657 Vendor ID: 4e58 00:12:52.657 Subsystem Vendor ID: 4e58 00:12:52.657 Serial Number: SPDK2 00:12:52.657 Model Number: SPDK bdev Controller 00:12:52.657 Firmware Version: 24.09 00:12:52.657 Recommended Arb Burst: 6 00:12:52.657 IEEE OUI Identifier: 8d 6b 50 00:12:52.657 Multi-path I/O 00:12:52.657 May have multiple subsystem ports: Yes 00:12:52.657 May have multiple controllers: Yes 00:12:52.657 Associated with SR-IOV VF: No 00:12:52.657 Max Data Transfer Size: 131072 00:12:52.657 Max Number of Namespaces: 32 00:12:52.657 Max Number of I/O Queues: 127 00:12:52.657 NVMe Specification Version (VS): 1.3 00:12:52.657 NVMe Specification Version (Identify): 1.3 00:12:52.657 Maximum Queue Entries: 256 00:12:52.657 Contiguous Queues Required: Yes 00:12:52.657 Arbitration Mechanisms 
Supported 00:12:52.657 Weighted Round Robin: Not Supported 00:12:52.657 Vendor Specific: Not Supported 00:12:52.657 Reset Timeout: 15000 ms 00:12:52.657 Doorbell Stride: 4 bytes 00:12:52.657 NVM Subsystem Reset: Not Supported 00:12:52.657 Command Sets Supported 00:12:52.657 NVM Command Set: Supported 00:12:52.657 Boot Partition: Not Supported 00:12:52.657 Memory Page Size Minimum: 4096 bytes 00:12:52.657 Memory Page Size Maximum: 4096 bytes 00:12:52.657 Persistent Memory Region: Not Supported 00:12:52.657 Optional Asynchronous Events Supported 00:12:52.657 Namespace Attribute Notices: Supported 00:12:52.657 Firmware Activation Notices: Not Supported 00:12:52.657 ANA Change Notices: Not Supported 00:12:52.657 PLE Aggregate Log Change Notices: Not Supported 00:12:52.657 LBA Status Info Alert Notices: Not Supported 00:12:52.657 EGE Aggregate Log Change Notices: Not Supported 00:12:52.657 Normal NVM Subsystem Shutdown event: Not Supported 00:12:52.657 Zone Descriptor Change Notices: Not Supported 00:12:52.657 Discovery Log Change Notices: Not Supported 00:12:52.657 Controller Attributes 00:12:52.657 128-bit Host Identifier: Supported 00:12:52.657 Non-Operational Permissive Mode: Not Supported 00:12:52.657 NVM Sets: Not Supported 00:12:52.657 Read Recovery Levels: Not Supported 00:12:52.657 Endurance Groups: Not Supported 00:12:52.657 Predictable Latency Mode: Not Supported 00:12:52.657 Traffic Based Keep ALive: Not Supported 00:12:52.657 Namespace Granularity: Not Supported 00:12:52.657 SQ Associations: Not Supported 00:12:52.657 UUID List: Not Supported 00:12:52.657 Multi-Domain Subsystem: Not Supported 00:12:52.657 Fixed Capacity Management: Not Supported 00:12:52.657 Variable Capacity Management: Not Supported 00:12:52.657 Delete Endurance Group: Not Supported 00:12:52.657 Delete NVM Set: Not Supported 00:12:52.657 Extended LBA Formats Supported: Not Supported 00:12:52.657 Flexible Data Placement Supported: Not Supported 00:12:52.657 00:12:52.657 Controller Memory Buffer Support 00:12:52.657 ================================ 00:12:52.657 Supported: No 00:12:52.657 00:12:52.657 Persistent Memory Region Support 00:12:52.657 ================================ 00:12:52.657 Supported: No 00:12:52.657 00:12:52.657 Admin Command Set Attributes 00:12:52.657 ============================ 00:12:52.657 Security Send/Receive: Not Supported 00:12:52.657 Format NVM: Not Supported 00:12:52.657 Firmware Activate/Download: Not Supported 00:12:52.657 Namespace Management: Not Supported 00:12:52.657 Device Self-Test: Not Supported 00:12:52.657 Directives: Not Supported 00:12:52.657 NVMe-MI: Not Supported 00:12:52.657 Virtualization Management: Not Supported 00:12:52.657 Doorbell Buffer Config: Not Supported 00:12:52.657 Get LBA Status Capability: Not Supported 00:12:52.657 Command & Feature Lockdown Capability: Not Supported 00:12:52.657 Abort Command Limit: 4 00:12:52.657 Async Event Request Limit: 4 00:12:52.657 Number of Firmware Slots: N/A 00:12:52.657 Firmware Slot 1 Read-Only: N/A 00:12:52.657 Firmware Activation Without Reset: N/A 00:12:52.657 Multiple Update Detection Support: N/A 00:12:52.657 Firmware Update Granularity: No Information Provided 00:12:52.657 Per-Namespace SMART Log: No 00:12:52.657 Asymmetric Namespace Access Log Page: Not Supported 00:12:52.657 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:52.657 Command Effects Log Page: Supported 00:12:52.657 Get Log Page Extended Data: Supported 00:12:52.657 Telemetry Log Pages: Not Supported 00:12:52.657 Persistent Event Log Pages: Not Supported 
00:12:52.657 Supported Log Pages Log Page: May Support 00:12:52.657 Commands Supported & Effects Log Page: Not Supported 00:12:52.657 Feature Identifiers & Effects Log Page:May Support 00:12:52.657 NVMe-MI Commands & Effects Log Page: May Support 00:12:52.657 Data Area 4 for Telemetry Log: Not Supported 00:12:52.657 Error Log Page Entries Supported: 128 00:12:52.657 Keep Alive: Supported 00:12:52.657 Keep Alive Granularity: 10000 ms 00:12:52.657 00:12:52.657 NVM Command Set Attributes 00:12:52.657 ========================== 00:12:52.657 Submission Queue Entry Size 00:12:52.657 Max: 64 00:12:52.657 Min: 64 00:12:52.657 Completion Queue Entry Size 00:12:52.657 Max: 16 00:12:52.657 Min: 16 00:12:52.657 Number of Namespaces: 32 00:12:52.657 Compare Command: Supported 00:12:52.657 Write Uncorrectable Command: Not Supported 00:12:52.657 Dataset Management Command: Supported 00:12:52.657 Write Zeroes Command: Supported 00:12:52.657 Set Features Save Field: Not Supported 00:12:52.657 Reservations: Not Supported 00:12:52.657 Timestamp: Not Supported 00:12:52.657 Copy: Supported 00:12:52.657 Volatile Write Cache: Present 00:12:52.657 Atomic Write Unit (Normal): 1 00:12:52.657 Atomic Write Unit (PFail): 1 00:12:52.657 Atomic Compare & Write Unit: 1 00:12:52.657 Fused Compare & Write: Supported 00:12:52.657 Scatter-Gather List 00:12:52.657 SGL Command Set: Supported (Dword aligned) 00:12:52.657 SGL Keyed: Not Supported 00:12:52.657 SGL Bit Bucket Descriptor: Not Supported 00:12:52.657 SGL Metadata Pointer: Not Supported 00:12:52.657 Oversized SGL: Not Supported 00:12:52.657 SGL Metadata Address: Not Supported 00:12:52.657 SGL Offset: Not Supported 00:12:52.657 Transport SGL Data Block: Not Supported 00:12:52.657 Replay Protected Memory Block: Not Supported 00:12:52.657 00:12:52.657 Firmware Slot Information 00:12:52.657 ========================= 00:12:52.657 Active slot: 1 00:12:52.657 Slot 1 Firmware Revision: 24.09 00:12:52.657 00:12:52.657 00:12:52.657 Commands Supported and Effects 00:12:52.657 ============================== 00:12:52.657 Admin Commands 00:12:52.657 -------------- 00:12:52.657 Get Log Page (02h): Supported 00:12:52.657 Identify (06h): Supported 00:12:52.657 Abort (08h): Supported 00:12:52.657 Set Features (09h): Supported 00:12:52.657 Get Features (0Ah): Supported 00:12:52.657 Asynchronous Event Request (0Ch): Supported 00:12:52.657 Keep Alive (18h): Supported 00:12:52.657 I/O Commands 00:12:52.657 ------------ 00:12:52.657 Flush (00h): Supported LBA-Change 00:12:52.657 Write (01h): Supported LBA-Change 00:12:52.657 Read (02h): Supported 00:12:52.657 Compare (05h): Supported 00:12:52.657 Write Zeroes (08h): Supported LBA-Change 00:12:52.657 Dataset Management (09h): Supported LBA-Change 00:12:52.657 Copy (19h): Supported LBA-Change 00:12:52.657 00:12:52.657 Error Log 00:12:52.657 ========= 00:12:52.657 00:12:52.657 Arbitration 00:12:52.657 =========== 00:12:52.657 Arbitration Burst: 1 00:12:52.657 00:12:52.657 Power Management 00:12:52.657 ================ 00:12:52.657 Number of Power States: 1 00:12:52.657 Current Power State: Power State #0 00:12:52.657 Power State #0: 00:12:52.657 Max Power: 0.00 W 00:12:52.657 Non-Operational State: Operational 00:12:52.657 Entry Latency: Not Reported 00:12:52.657 Exit Latency: Not Reported 00:12:52.657 Relative Read Throughput: 0 00:12:52.657 Relative Read Latency: 0 00:12:52.657 Relative Write Throughput: 0 00:12:52.657 Relative Write Latency: 0 00:12:52.657 Idle Power: Not Reported 00:12:52.657 Active Power: Not Reported 00:12:52.657 
Non-Operational Permissive Mode: Not Supported 00:12:52.657 00:12:52.657 Health Information 00:12:52.657 ================== 00:12:52.657 Critical Warnings: 00:12:52.657 Available Spare Space: OK 00:12:52.657 Temperature: OK 00:12:52.657 Device Reliability: OK 00:12:52.657 Read Only: No 00:12:52.657 Volatile Memory Backup: OK 00:12:52.657 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:52.657 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:52.657 Available Spare: 0% 00:12:52.658 Available Sp[2024-07-15 21:03:19.734367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:52.658 [2024-07-15 21:03:19.742235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:52.658 [2024-07-15 21:03:19.742266] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:52.658 [2024-07-15 21:03:19.742275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.658 [2024-07-15 21:03:19.742282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.658 [2024-07-15 21:03:19.742288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.658 [2024-07-15 21:03:19.742294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.658 [2024-07-15 21:03:19.742349] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:52.658 [2024-07-15 21:03:19.742362] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:52.658 [2024-07-15 21:03:19.743350] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.658 [2024-07-15 21:03:19.743400] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:52.658 [2024-07-15 21:03:19.743406] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:52.658 [2024-07-15 21:03:19.744348] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:52.658 [2024-07-15 21:03:19.744360] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:52.658 [2024-07-15 21:03:19.744410] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:52.658 [2024-07-15 21:03:19.745785] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:52.658 are Threshold: 0% 00:12:52.658 Life Percentage Used: 0% 00:12:52.658 Data Units Read: 0 00:12:52.658 Data Units Written: 0 00:12:52.658 Host Read Commands: 0 00:12:52.658 Host Write Commands: 0 00:12:52.658 Controller Busy Time: 0 minutes 00:12:52.658 Power Cycles: 0 00:12:52.658 Power On Hours: 0 hours 00:12:52.658 Unsafe Shutdowns: 0 00:12:52.658 Unrecoverable Media 
Errors: 0 00:12:52.658 Lifetime Error Log Entries: 0 00:12:52.658 Warning Temperature Time: 0 minutes 00:12:52.658 Critical Temperature Time: 0 minutes 00:12:52.658 00:12:52.658 Number of Queues 00:12:52.658 ================ 00:12:52.658 Number of I/O Submission Queues: 127 00:12:52.658 Number of I/O Completion Queues: 127 00:12:52.658 00:12:52.658 Active Namespaces 00:12:52.658 ================= 00:12:52.658 Namespace ID:1 00:12:52.658 Error Recovery Timeout: Unlimited 00:12:52.658 Command Set Identifier: NVM (00h) 00:12:52.658 Deallocate: Supported 00:12:52.658 Deallocated/Unwritten Error: Not Supported 00:12:52.658 Deallocated Read Value: Unknown 00:12:52.658 Deallocate in Write Zeroes: Not Supported 00:12:52.658 Deallocated Guard Field: 0xFFFF 00:12:52.658 Flush: Supported 00:12:52.658 Reservation: Supported 00:12:52.658 Namespace Sharing Capabilities: Multiple Controllers 00:12:52.658 Size (in LBAs): 131072 (0GiB) 00:12:52.658 Capacity (in LBAs): 131072 (0GiB) 00:12:52.658 Utilization (in LBAs): 131072 (0GiB) 00:12:52.658 NGUID: AA20FC6FD63546EF8991BB68140C9366 00:12:52.658 UUID: aa20fc6f-d635-46ef-8991-bb68140c9366 00:12:52.658 Thin Provisioning: Not Supported 00:12:52.658 Per-NS Atomic Units: Yes 00:12:52.658 Atomic Boundary Size (Normal): 0 00:12:52.658 Atomic Boundary Size (PFail): 0 00:12:52.658 Atomic Boundary Offset: 0 00:12:52.658 Maximum Single Source Range Length: 65535 00:12:52.658 Maximum Copy Length: 65535 00:12:52.658 Maximum Source Range Count: 1 00:12:52.658 NGUID/EUI64 Never Reused: No 00:12:52.658 Namespace Write Protected: No 00:12:52.658 Number of LBA Formats: 1 00:12:52.658 Current LBA Format: LBA Format #00 00:12:52.658 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:52.658 00:12:52.658 21:03:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:52.658 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.658 [2024-07-15 21:03:19.930259] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.940 Initializing NVMe Controllers 00:12:57.940 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:57.940 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:57.940 Initialization complete. Launching workers. 
00:12:57.940 ======================================================== 00:12:57.941 Latency(us) 00:12:57.941 Device Information : IOPS MiB/s Average min max 00:12:57.941 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40040.08 156.41 3196.67 842.79 8300.92 00:12:57.941 ======================================================== 00:12:57.941 Total : 40040.08 156.41 3196.67 842.79 8300.92 00:12:57.941 00:12:57.941 [2024-07-15 21:03:25.030419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.941 21:03:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:57.941 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.941 [2024-07-15 21:03:25.212968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.224 Initializing NVMe Controllers 00:13:03.224 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:03.224 Initialization complete. Launching workers. 00:13:03.224 ======================================================== 00:13:03.224 Latency(us) 00:13:03.224 Device Information : IOPS MiB/s Average min max 00:13:03.224 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35455.00 138.50 3611.49 1108.00 7615.50 00:13:03.224 ======================================================== 00:13:03.224 Total : 35455.00 138.50 3611.49 1108.00 7615.50 00:13:03.224 00:13:03.224 [2024-07-15 21:03:30.237439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.224 21:03:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:03.224 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.224 [2024-07-15 21:03:30.433555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:08.610 [2024-07-15 21:03:35.565318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:08.610 Initializing NVMe Controllers 00:13:08.610 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:08.610 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:08.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:08.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:08.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:08.610 Initialization complete. Launching workers. 
00:13:08.610 Starting thread on core 2 00:13:08.610 Starting thread on core 3 00:13:08.610 Starting thread on core 1 00:13:08.610 21:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:08.610 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.610 [2024-07-15 21:03:35.831126] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.910 [2024-07-15 21:03:38.886281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.910 Initializing NVMe Controllers 00:13:11.910 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.910 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.910 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:11.910 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:11.910 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:11.910 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:11.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:11.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:11.910 Initialization complete. Launching workers. 00:13:11.910 Starting thread on core 1 with urgent priority queue 00:13:11.910 Starting thread on core 2 with urgent priority queue 00:13:11.910 Starting thread on core 3 with urgent priority queue 00:13:11.910 Starting thread on core 0 with urgent priority queue 00:13:11.910 SPDK bdev Controller (SPDK2 ) core 0: 9087.33 IO/s 11.00 secs/100000 ios 00:13:11.910 SPDK bdev Controller (SPDK2 ) core 1: 6984.33 IO/s 14.32 secs/100000 ios 00:13:11.910 SPDK bdev Controller (SPDK2 ) core 2: 11023.67 IO/s 9.07 secs/100000 ios 00:13:11.910 SPDK bdev Controller (SPDK2 ) core 3: 9946.33 IO/s 10.05 secs/100000 ios 00:13:11.910 ======================================================== 00:13:11.910 00:13:11.910 21:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:11.910 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.910 [2024-07-15 21:03:39.158174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.910 Initializing NVMe Controllers 00:13:11.910 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.910 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.910 Namespace ID: 1 size: 0GB 00:13:11.910 Initialization complete. 00:13:11.910 INFO: using host memory buffer for IO 00:13:11.910 Hello world! 
00:13:11.910 [2024-07-15 21:03:39.167228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:12.170 21:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:12.170 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.170 [2024-07-15 21:03:39.439389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.557 Initializing NVMe Controllers 00:13:13.557 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.557 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.557 Initialization complete. Launching workers. 00:13:13.557 submit (in ns) avg, min, max = 8504.1, 3917.5, 4000012.5 00:13:13.557 complete (in ns) avg, min, max = 18653.5, 2395.8, 6990005.8 00:13:13.557 00:13:13.557 Submit histogram 00:13:13.557 ================ 00:13:13.557 Range in us Cumulative Count 00:13:13.557 3.893 - 3.920: 0.0053% ( 1) 00:13:13.557 3.920 - 3.947: 1.1279% ( 213) 00:13:13.557 3.947 - 3.973: 6.5405% ( 1027) 00:13:13.557 3.973 - 4.000: 14.8624% ( 1579) 00:13:13.557 4.000 - 4.027: 24.4440% ( 1818) 00:13:13.557 4.027 - 4.053: 36.9400% ( 2371) 00:13:13.557 4.053 - 4.080: 50.9065% ( 2650) 00:13:13.557 4.080 - 4.107: 70.3647% ( 3692) 00:13:13.557 4.107 - 4.133: 83.9201% ( 2572) 00:13:13.557 4.133 - 4.160: 92.5740% ( 1642) 00:13:13.557 4.160 - 4.187: 97.0591% ( 851) 00:13:13.557 4.187 - 4.213: 98.6350% ( 299) 00:13:13.557 4.213 - 4.240: 99.1357% ( 95) 00:13:13.557 4.240 - 4.267: 99.3781% ( 46) 00:13:13.557 4.267 - 4.293: 99.4413% ( 12) 00:13:13.557 4.293 - 4.320: 99.4519% ( 2) 00:13:13.557 4.320 - 4.347: 99.4624% ( 2) 00:13:13.557 4.373 - 4.400: 99.4730% ( 2) 00:13:13.557 4.400 - 4.427: 99.4782% ( 1) 00:13:13.557 4.480 - 4.507: 99.4888% ( 2) 00:13:13.557 4.533 - 4.560: 99.4993% ( 2) 00:13:13.557 4.640 - 4.667: 99.5046% ( 1) 00:13:13.557 4.720 - 4.747: 99.5099% ( 1) 00:13:13.557 4.773 - 4.800: 99.5151% ( 1) 00:13:13.557 4.800 - 4.827: 99.5204% ( 1) 00:13:13.557 4.907 - 4.933: 99.5257% ( 1) 00:13:13.557 4.933 - 4.960: 99.5309% ( 1) 00:13:13.557 5.120 - 5.147: 99.5362% ( 1) 00:13:13.557 5.173 - 5.200: 99.5415% ( 1) 00:13:13.557 5.333 - 5.360: 99.5467% ( 1) 00:13:13.557 5.387 - 5.413: 99.5520% ( 1) 00:13:13.557 5.627 - 5.653: 99.5626% ( 2) 00:13:13.557 5.733 - 5.760: 99.5678% ( 1) 00:13:13.557 6.027 - 6.053: 99.5731% ( 1) 00:13:13.557 6.053 - 6.080: 99.5784% ( 1) 00:13:13.557 6.187 - 6.213: 99.5836% ( 1) 00:13:13.557 6.213 - 6.240: 99.5889% ( 1) 00:13:13.557 6.293 - 6.320: 99.5942% ( 1) 00:13:13.557 6.320 - 6.347: 99.5995% ( 1) 00:13:13.557 6.400 - 6.427: 99.6100% ( 2) 00:13:13.557 6.480 - 6.507: 99.6153% ( 1) 00:13:13.557 6.560 - 6.587: 99.6205% ( 1) 00:13:13.557 6.587 - 6.613: 99.6258% ( 1) 00:13:13.557 6.613 - 6.640: 99.6311% ( 1) 00:13:13.557 6.640 - 6.667: 99.6363% ( 1) 00:13:13.557 6.693 - 6.720: 99.6469% ( 2) 00:13:13.557 6.773 - 6.800: 99.6627% ( 3) 00:13:13.557 6.800 - 6.827: 99.6732% ( 2) 00:13:13.557 6.827 - 6.880: 99.6890% ( 3) 00:13:13.557 6.880 - 6.933: 99.6943% ( 1) 00:13:13.557 6.933 - 6.987: 99.7259% ( 6) 00:13:13.557 6.987 - 7.040: 99.7365% ( 2) 00:13:13.557 7.040 - 7.093: 99.7523% ( 3) 00:13:13.557 7.093 - 7.147: 99.7681% ( 3) 00:13:13.557 7.200 - 7.253: 99.7734% ( 1) 00:13:13.558 7.253 - 7.307: 99.7786% ( 1) 00:13:13.558 7.307 - 7.360: 99.7892% ( 2) 
00:13:13.558 7.360 - 7.413: 99.7997% ( 2) 00:13:13.558 7.413 - 7.467: 99.8050% ( 1) 00:13:13.558 7.467 - 7.520: 99.8155% ( 2) 00:13:13.558 7.520 - 7.573: 99.8208% ( 1) 00:13:13.558 7.680 - 7.733: 99.8261% ( 1) 00:13:13.558 7.893 - 7.947: 99.8366% ( 2) 00:13:13.558 7.947 - 8.000: 99.8419% ( 1) 00:13:13.558 8.000 - 8.053: 99.8472% ( 1) 00:13:13.558 8.053 - 8.107: 99.8524% ( 1) 00:13:13.558 8.160 - 8.213: 99.8577% ( 1) 00:13:13.558 8.320 - 8.373: 99.8630% ( 1) 00:13:13.558 8.427 - 8.480: 99.8735% ( 2) 00:13:13.558 8.640 - 8.693: 99.8788% ( 1) 00:13:13.558 11.253 - 11.307: 99.8841% ( 1) 00:13:13.558 12.960 - 13.013: 99.8893% ( 1) 00:13:13.558 3986.773 - 4014.080: 100.0000% ( 21) 00:13:13.558 00:13:13.558 Complete histogram 00:13:13.558 ================== 00:13:13.558 Range in us Cumulative Count 00:13:13.558 2.387 - 2.400: 0.0264% ( 5) 00:13:13.558 2.400 - 2.413: 0.2951% ( 51) 00:13:13.558 2.413 - 2.427: 0.5692% ( 52) 00:13:13.558 2.427 - 2.440: 1.1279% ( 106) 00:13:13.558 2.440 - [2024-07-15 21:03:40.537919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.558 2.453: 1.1858% ( 11) 00:13:13.558 2.453 - 2.467: 30.5154% ( 5565) 00:13:13.558 2.467 - 2.480: 62.7543% ( 6117) 00:13:13.558 2.480 - 2.493: 72.6257% ( 1873) 00:13:13.558 2.493 - 2.507: 80.0833% ( 1415) 00:13:13.558 2.507 - 2.520: 83.3298% ( 616) 00:13:13.558 2.520 - 2.533: 85.6699% ( 444) 00:13:13.558 2.533 - 2.547: 90.2129% ( 862) 00:13:13.558 2.547 - 2.560: 95.2988% ( 965) 00:13:13.558 2.560 - 2.573: 97.5545% ( 428) 00:13:13.558 2.573 - 2.587: 98.6561% ( 209) 00:13:13.558 2.587 - 2.600: 99.1304% ( 90) 00:13:13.558 2.600 - 2.613: 99.3043% ( 33) 00:13:13.558 2.613 - 2.627: 99.3465% ( 8) 00:13:13.558 2.627 - 2.640: 99.3676% ( 4) 00:13:13.558 2.640 - 2.653: 99.3728% ( 1) 00:13:13.558 2.680 - 2.693: 99.3781% ( 1) 00:13:13.558 2.840 - 2.853: 99.3834% ( 1) 00:13:13.558 2.907 - 2.920: 99.3886% ( 1) 00:13:13.558 4.480 - 4.507: 99.3939% ( 1) 00:13:13.558 4.640 - 4.667: 99.3992% ( 1) 00:13:13.558 4.773 - 4.800: 99.4044% ( 1) 00:13:13.558 4.800 - 4.827: 99.4150% ( 2) 00:13:13.558 4.827 - 4.853: 99.4203% ( 1) 00:13:13.558 4.853 - 4.880: 99.4255% ( 1) 00:13:13.558 4.933 - 4.960: 99.4308% ( 1) 00:13:13.558 4.960 - 4.987: 99.4361% ( 1) 00:13:13.558 4.987 - 5.013: 99.4466% ( 2) 00:13:13.558 5.067 - 5.093: 99.4519% ( 1) 00:13:13.558 5.093 - 5.120: 99.4572% ( 1) 00:13:13.558 5.200 - 5.227: 99.4624% ( 1) 00:13:13.558 5.227 - 5.253: 99.4677% ( 1) 00:13:13.558 5.253 - 5.280: 99.4782% ( 2) 00:13:13.558 5.333 - 5.360: 99.4835% ( 1) 00:13:13.558 5.387 - 5.413: 99.4888% ( 1) 00:13:13.558 5.413 - 5.440: 99.4940% ( 1) 00:13:13.558 5.493 - 5.520: 99.4993% ( 1) 00:13:13.558 5.547 - 5.573: 99.5046% ( 1) 00:13:13.558 5.573 - 5.600: 99.5151% ( 2) 00:13:13.558 5.627 - 5.653: 99.5309% ( 3) 00:13:13.558 6.053 - 6.080: 99.5362% ( 1) 00:13:13.558 6.133 - 6.160: 99.5415% ( 1) 00:13:13.558 6.267 - 6.293: 99.5467% ( 1) 00:13:13.558 6.400 - 6.427: 99.5520% ( 1) 00:13:13.558 6.453 - 6.480: 99.5573% ( 1) 00:13:13.558 6.533 - 6.560: 99.5626% ( 1) 00:13:13.558 6.720 - 6.747: 99.5678% ( 1) 00:13:13.558 7.200 - 7.253: 99.5731% ( 1) 00:13:13.558 7.787 - 7.840: 99.5784% ( 1) 00:13:13.558 9.707 - 9.760: 99.5836% ( 1) 00:13:13.558 12.213 - 12.267: 99.5889% ( 1) 00:13:13.558 12.373 - 12.427: 99.5942% ( 1) 00:13:13.558 16.213 - 16.320: 99.5995% ( 1) 00:13:13.558 3986.773 - 4014.080: 99.9947% ( 75) 00:13:13.558 6963.200 - 6990.507: 100.0000% ( 1) 00:13:13.558 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:13.558 [ 00:13:13.558 { 00:13:13.558 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:13.558 "subtype": "Discovery", 00:13:13.558 "listen_addresses": [], 00:13:13.558 "allow_any_host": true, 00:13:13.558 "hosts": [] 00:13:13.558 }, 00:13:13.558 { 00:13:13.558 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:13.558 "subtype": "NVMe", 00:13:13.558 "listen_addresses": [ 00:13:13.558 { 00:13:13.558 "trtype": "VFIOUSER", 00:13:13.558 "adrfam": "IPv4", 00:13:13.558 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:13.558 "trsvcid": "0" 00:13:13.558 } 00:13:13.558 ], 00:13:13.558 "allow_any_host": true, 00:13:13.558 "hosts": [], 00:13:13.558 "serial_number": "SPDK1", 00:13:13.558 "model_number": "SPDK bdev Controller", 00:13:13.558 "max_namespaces": 32, 00:13:13.558 "min_cntlid": 1, 00:13:13.558 "max_cntlid": 65519, 00:13:13.558 "namespaces": [ 00:13:13.558 { 00:13:13.558 "nsid": 1, 00:13:13.558 "bdev_name": "Malloc1", 00:13:13.558 "name": "Malloc1", 00:13:13.558 "nguid": "C910EA890A03451595E9886510ACB6D2", 00:13:13.558 "uuid": "c910ea89-0a03-4515-95e9-886510acb6d2" 00:13:13.558 }, 00:13:13.558 { 00:13:13.558 "nsid": 2, 00:13:13.558 "bdev_name": "Malloc3", 00:13:13.558 "name": "Malloc3", 00:13:13.558 "nguid": "6585909459AE420DB2FD341CD8950842", 00:13:13.558 "uuid": "65859094-59ae-420d-b2fd-341cd8950842" 00:13:13.558 } 00:13:13.558 ] 00:13:13.558 }, 00:13:13.558 { 00:13:13.558 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:13.558 "subtype": "NVMe", 00:13:13.558 "listen_addresses": [ 00:13:13.558 { 00:13:13.558 "trtype": "VFIOUSER", 00:13:13.558 "adrfam": "IPv4", 00:13:13.558 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:13.558 "trsvcid": "0" 00:13:13.558 } 00:13:13.558 ], 00:13:13.558 "allow_any_host": true, 00:13:13.558 "hosts": [], 00:13:13.558 "serial_number": "SPDK2", 00:13:13.558 "model_number": "SPDK bdev Controller", 00:13:13.558 "max_namespaces": 32, 00:13:13.558 "min_cntlid": 1, 00:13:13.558 "max_cntlid": 65519, 00:13:13.558 "namespaces": [ 00:13:13.558 { 00:13:13.558 "nsid": 1, 00:13:13.558 "bdev_name": "Malloc2", 00:13:13.558 "name": "Malloc2", 00:13:13.558 "nguid": "AA20FC6FD63546EF8991BB68140C9366", 00:13:13.558 "uuid": "aa20fc6f-d635-46ef-8991-bb68140c9366" 00:13:13.558 } 00:13:13.558 ] 00:13:13.558 } 00:13:13.558 ] 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1873144 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:13.558 
21:03:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:13.558 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:13.558 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.820 Malloc4 00:13:13.820 [2024-07-15 21:03:40.930679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.820 21:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:13.820 [2024-07-15 21:03:41.098784] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:14.081 Asynchronous Event Request test 00:13:14.081 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.081 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.081 Registering asynchronous event callbacks... 00:13:14.081 Starting namespace attribute notice tests for all controllers... 00:13:14.081 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:14.081 aer_cb - Changed Namespace 00:13:14.081 Cleaning up... 
00:13:14.081 [ 00:13:14.081 { 00:13:14.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:14.081 "subtype": "Discovery", 00:13:14.081 "listen_addresses": [], 00:13:14.081 "allow_any_host": true, 00:13:14.081 "hosts": [] 00:13:14.081 }, 00:13:14.081 { 00:13:14.081 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:14.081 "subtype": "NVMe", 00:13:14.081 "listen_addresses": [ 00:13:14.081 { 00:13:14.081 "trtype": "VFIOUSER", 00:13:14.081 "adrfam": "IPv4", 00:13:14.081 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:14.081 "trsvcid": "0" 00:13:14.081 } 00:13:14.081 ], 00:13:14.081 "allow_any_host": true, 00:13:14.081 "hosts": [], 00:13:14.081 "serial_number": "SPDK1", 00:13:14.081 "model_number": "SPDK bdev Controller", 00:13:14.081 "max_namespaces": 32, 00:13:14.081 "min_cntlid": 1, 00:13:14.081 "max_cntlid": 65519, 00:13:14.081 "namespaces": [ 00:13:14.081 { 00:13:14.081 "nsid": 1, 00:13:14.081 "bdev_name": "Malloc1", 00:13:14.081 "name": "Malloc1", 00:13:14.081 "nguid": "C910EA890A03451595E9886510ACB6D2", 00:13:14.081 "uuid": "c910ea89-0a03-4515-95e9-886510acb6d2" 00:13:14.081 }, 00:13:14.081 { 00:13:14.081 "nsid": 2, 00:13:14.081 "bdev_name": "Malloc3", 00:13:14.081 "name": "Malloc3", 00:13:14.081 "nguid": "6585909459AE420DB2FD341CD8950842", 00:13:14.081 "uuid": "65859094-59ae-420d-b2fd-341cd8950842" 00:13:14.081 } 00:13:14.081 ] 00:13:14.081 }, 00:13:14.081 { 00:13:14.081 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:14.081 "subtype": "NVMe", 00:13:14.081 "listen_addresses": [ 00:13:14.081 { 00:13:14.081 "trtype": "VFIOUSER", 00:13:14.081 "adrfam": "IPv4", 00:13:14.081 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:14.081 "trsvcid": "0" 00:13:14.081 } 00:13:14.081 ], 00:13:14.081 "allow_any_host": true, 00:13:14.081 "hosts": [], 00:13:14.081 "serial_number": "SPDK2", 00:13:14.081 "model_number": "SPDK bdev Controller", 00:13:14.081 "max_namespaces": 32, 00:13:14.081 "min_cntlid": 1, 00:13:14.081 "max_cntlid": 65519, 00:13:14.081 "namespaces": [ 00:13:14.081 { 00:13:14.081 "nsid": 1, 00:13:14.081 "bdev_name": "Malloc2", 00:13:14.081 "name": "Malloc2", 00:13:14.081 "nguid": "AA20FC6FD63546EF8991BB68140C9366", 00:13:14.081 "uuid": "aa20fc6f-d635-46ef-8991-bb68140c9366" 00:13:14.081 }, 00:13:14.081 { 00:13:14.081 "nsid": 2, 00:13:14.081 "bdev_name": "Malloc4", 00:13:14.081 "name": "Malloc4", 00:13:14.081 "nguid": "64AF33E1D5C549DD88374350905B54EC", 00:13:14.081 "uuid": "64af33e1-d5c5-49dd-8837-4350905b54ec" 00:13:14.081 } 00:13:14.081 ] 00:13:14.081 } 00:13:14.081 ] 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1873144 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1863505 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1863505 ']' 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1863505 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1863505 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1863505' 00:13:14.081 killing process with pid 1863505 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1863505 00:13:14.081 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1863505 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1873308 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1873308' 00:13:14.342 Process pid: 1873308 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1873308 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1873308 ']' 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.342 21:03:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:14.342 [2024-07-15 21:03:41.579414] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:14.342 [2024-07-15 21:03:41.580330] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:13:14.342 [2024-07-15 21:03:41.580371] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.342 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.603 [2024-07-15 21:03:41.648553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.603 [2024-07-15 21:03:41.714747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.603 [2024-07-15 21:03:41.714787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:14.603 [2024-07-15 21:03:41.714794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.603 [2024-07-15 21:03:41.714801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.603 [2024-07-15 21:03:41.714806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.603 [2024-07-15 21:03:41.714954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.603 [2024-07-15 21:03:41.715082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.603 [2024-07-15 21:03:41.715254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.603 [2024-07-15 21:03:41.715255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.603 [2024-07-15 21:03:41.791301] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:14.603 [2024-07-15 21:03:41.791352] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:14.603 [2024-07-15 21:03:41.792391] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:14.603 [2024-07-15 21:03:41.792787] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:14.603 [2024-07-15 21:03:41.792883] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:15.173 21:03:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.173 21:03:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:15.173 21:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:16.116 21:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:16.376 21:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:16.376 21:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:16.376 21:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:16.376 21:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:16.376 21:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:16.636 Malloc1 00:13:16.636 21:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:16.636 21:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:16.895 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:17.155 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:17.155 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:17.155 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:17.155 Malloc2 00:13:17.155 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:17.415 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:17.676 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:17.676 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:17.676 21:03:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1873308 00:13:17.676 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1873308 ']' 00:13:17.676 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1873308 00:13:17.676 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:17.676 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.676 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1873308 00:13:17.935 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:17.935 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:17.935 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1873308' 00:13:17.935 killing process with pid 1873308 00:13:17.935 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1873308 00:13:17.935 21:03:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1873308 00:13:17.935 21:03:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:17.935 21:03:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:17.935 00:13:17.935 real 0m50.650s 00:13:17.935 user 3m20.629s 00:13:17.935 sys 0m3.027s 00:13:17.935 21:03:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.935 21:03:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:17.935 ************************************ 00:13:17.935 END TEST nvmf_vfio_user 00:13:17.935 ************************************ 00:13:17.935 21:03:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:17.935 21:03:45 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:17.935 21:03:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.935 21:03:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.935 21:03:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.196 ************************************ 00:13:18.196 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:18.196 ************************************ 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:18.196 * Looking for test storage... 00:13:18.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1874222 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1874222' 00:13:18.196 Process pid: 1874222 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1874222 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1874222 ']' 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.196 21:03:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:18.196 [2024-07-15 21:03:45.423753] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:13:18.196 [2024-07-15 21:03:45.423822] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.196 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.456 [2024-07-15 21:03:45.494043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.456 [2024-07-15 21:03:45.559064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.456 [2024-07-15 21:03:45.559101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.456 [2024-07-15 21:03:45.559109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.456 [2024-07-15 21:03:45.559115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.456 [2024-07-15 21:03:45.559121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:18.456 [2024-07-15 21:03:45.559276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.456 [2024-07-15 21:03:45.559544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.456 [2024-07-15 21:03:45.559547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.027 21:03:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.027 21:03:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:19.027 21:03:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:19.968 malloc0 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.968 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.227 21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.227 
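Condensed for readability, the VFIO-user target setup that compliance.sh traced above amounts to the sequence below. This is a sketch, not additional output: every command and value is taken verbatim from the trace, and rpc_cmd is the autotest harness helper that forwards each call to the target's RPC socket (/var/tmp/spdk.sock in this run).

    rpc_cmd nvmf_create_transport -t VFIOUSER                        # enable the vfio-user transport
    mkdir -p /var/run/vfio-user                                      # directory backing the listener
    rpc_cmd bdev_malloc_create 64 512 -b malloc0                     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance binary is then pointed at that /var/run/vfio-user listener, which is what the next trace records show.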
21:03:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:20.227 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.227 00:13:20.227 00:13:20.227 CUnit - A unit testing framework for C - Version 2.1-3 00:13:20.227 http://cunit.sourceforge.net/ 00:13:20.227 00:13:20.227 00:13:20.227 Suite: nvme_compliance 00:13:20.227 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 21:03:47.461699] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.227 [2024-07-15 21:03:47.463050] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:20.227 [2024-07-15 21:03:47.463060] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:20.227 [2024-07-15 21:03:47.463064] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:20.227 [2024-07-15 21:03:47.464727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.227 passed 00:13:20.486 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 21:03:47.560315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.486 [2024-07-15 21:03:47.563331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.486 passed 00:13:20.486 Test: admin_identify_ns ...[2024-07-15 21:03:47.658507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.486 [2024-07-15 21:03:47.722246] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:20.486 [2024-07-15 21:03:47.730245] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:20.486 [2024-07-15 21:03:47.751358] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.746 passed 00:13:20.746 Test: admin_get_features_mandatory_features ...[2024-07-15 21:03:47.841982] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.746 [2024-07-15 21:03:47.845002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.746 passed 00:13:20.746 Test: admin_get_features_optional_features ...[2024-07-15 21:03:47.940536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.746 [2024-07-15 21:03:47.943547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.746 passed 00:13:20.746 Test: admin_set_features_number_of_queues ...[2024-07-15 21:03:48.035489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.006 [2024-07-15 21:03:48.140338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.006 passed 00:13:21.006 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 21:03:48.234472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.006 [2024-07-15 21:03:48.237491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.006 passed 00:13:21.266 Test: admin_get_log_page_with_lpo ...[2024-07-15 21:03:48.331489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.266 [2024-07-15 21:03:48.399244] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:21.266 [2024-07-15 21:03:48.412290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.266 passed 00:13:21.266 Test: fabric_property_get ...[2024-07-15 21:03:48.506365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.266 [2024-07-15 21:03:48.507615] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:21.266 [2024-07-15 21:03:48.509387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.266 passed 00:13:21.525 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 21:03:48.603892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.525 [2024-07-15 21:03:48.605158] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:21.525 [2024-07-15 21:03:48.606919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.525 passed 00:13:21.525 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 21:03:48.700471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.525 [2024-07-15 21:03:48.784245] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:21.525 [2024-07-15 21:03:48.800239] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:21.525 [2024-07-15 21:03:48.805321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.785 passed 00:13:21.785 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 21:03:48.897321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.785 [2024-07-15 21:03:48.898559] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:21.785 [2024-07-15 21:03:48.900339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.785 passed 00:13:21.785 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 21:03:48.995510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.785 [2024-07-15 21:03:49.072238] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:22.044 [2024-07-15 21:03:49.096234] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:22.045 [2024-07-15 21:03:49.101309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.045 passed 00:13:22.045 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 21:03:49.192903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.045 [2024-07-15 21:03:49.194138] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:22.045 [2024-07-15 21:03:49.194158] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:22.045 [2024-07-15 21:03:49.195915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.045 passed 00:13:22.045 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 21:03:49.288467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.304 [2024-07-15 21:03:49.380237] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:22.304 [2024-07-15 21:03:49.388244] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:22.304 [2024-07-15 21:03:49.396238] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:22.304 [2024-07-15 21:03:49.404234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:22.304 [2024-07-15 21:03:49.433318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.304 passed 00:13:22.304 Test: admin_create_io_sq_verify_pc ...[2024-07-15 21:03:49.525321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.304 [2024-07-15 21:03:49.543246] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:22.304 [2024-07-15 21:03:49.560483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.564 passed 00:13:22.564 Test: admin_create_io_qp_max_qps ...[2024-07-15 21:03:49.649974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.504 [2024-07-15 21:03:50.762241] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:24.072 [2024-07-15 21:03:51.142675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.072 passed 00:13:24.072 Test: admin_create_io_sq_shared_cq ...[2024-07-15 21:03:51.233467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.332 [2024-07-15 21:03:51.365237] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:24.332 [2024-07-15 21:03:51.402295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.332 passed 00:13:24.332 00:13:24.332 Run Summary: Type Total Ran Passed Failed Inactive 00:13:24.332 suites 1 1 n/a 0 0 00:13:24.332 tests 18 18 18 0 0 00:13:24.332 asserts 360 360 360 0 n/a 00:13:24.332 00:13:24.332 Elapsed time = 1.653 seconds 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1874222 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1874222 ']' 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1874222 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1874222 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1874222' 00:13:24.332 killing process with pid 1874222 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1874222 00:13:24.332 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1874222 00:13:24.592 21:03:51 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:24.592 00:13:24.592 real 0m6.429s 00:13:24.592 user 0m18.387s 00:13:24.592 sys 0m0.463s 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:24.592 ************************************ 00:13:24.592 END TEST nvmf_vfio_user_nvme_compliance 00:13:24.592 ************************************ 00:13:24.592 21:03:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:24.592 21:03:51 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:24.592 21:03:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.592 21:03:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.592 21:03:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.592 ************************************ 00:13:24.592 START TEST nvmf_vfio_user_fuzz 00:13:24.592 ************************************ 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:24.592 * Looking for test storage... 00:13:24.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.592 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.593 21:03:51 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1875530 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1875530' 00:13:24.593 Process pid: 1875530 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1875530 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1875530 ']' 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
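vfio_user_fuzz.sh brings up its own target in the same pattern, but pins it to a single core; the nvme_fuzz process started a few records below is given -m 0x2, so target and fuzzer run on separate cores. A condensed sketch of the launch just traced, assuming the usual backgrounding that the killprocess/waitforlisten helpers named in the trace rely on:

    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &     # shm id 0, tracepoint group mask 0xFFFF, reactor on core 0 only
    nvmfpid=$!                           # 1875530 in this run
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $nvmfpid               # returns once /var/tmp/spdk.sock is accepting RPCs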
00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.593 21:03:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:25.533 21:03:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.533 21:03:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:25.533 21:03:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.474 malloc0 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.474 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.734 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.734 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:26.734 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.734 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.734 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.734 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:26.734 21:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:58.830 Fuzzing completed. 
Shutting down the fuzz application 00:13:58.830 00:13:58.830 Dumping successful admin opcodes: 00:13:58.830 8, 9, 10, 24, 00:13:58.830 Dumping successful io opcodes: 00:13:58.830 0, 00:13:58.830 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1140540, total successful commands: 4493, random_seed: 1591646528 00:13:58.830 NS: 0x200003a1ef00 admin qp, Total commands completed: 143568, total successful commands: 1168, random_seed: 4180997376 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1875530 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1875530 ']' 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1875530 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1875530 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1875530' 00:13:58.830 killing process with pid 1875530 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1875530 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1875530 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:58.830 00:13:58.830 real 0m33.701s 00:13:58.830 user 0m37.998s 00:13:58.830 sys 0m26.226s 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.830 21:04:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.830 ************************************ 00:13:58.830 END TEST nvmf_vfio_user_fuzz 00:13:58.830 ************************************ 00:13:58.830 21:04:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:58.830 21:04:25 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:58.830 21:04:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:58.830 21:04:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.830 21:04:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.830 ************************************ 
00:13:58.830 START TEST nvmf_host_management 00:13:58.830 ************************************ 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:58.830 * Looking for test storage... 00:13:58.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.830 
21:04:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.830 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.831 21:04:25 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.831 21:04:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:06.963 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:06.963 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:06.963 Found net devices under 0000:31:00.0: cvl_0_0 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:06.963 Found net devices under 0000:31:00.1: cvl_0_1 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.963 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:06.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:14:06.963 00:14:06.963 --- 10.0.0.2 ping statistics --- 00:14:06.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.963 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:14:06.964 00:14:06.964 --- 10.0.0.1 ping statistics --- 00:14:06.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.964 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1886291 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1886291 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1886291 ']' 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:06.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.964 21:04:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.964 [2024-07-15 21:04:33.815204] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:06.964 [2024-07-15 21:04:33.815274] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.964 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.964 [2024-07-15 21:04:33.882100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.964 [2024-07-15 21:04:33.949297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.964 [2024-07-15 21:04:33.949345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.964 [2024-07-15 21:04:33.949351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.964 [2024-07-15 21:04:33.949357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.964 [2024-07-15 21:04:33.949361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.964 [2024-07-15 21:04:33.949481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.964 [2024-07-15 21:04:33.949757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.964 [2024-07-15 21:04:33.952253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:06.964 [2024-07-15 21:04:33.952414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.964 [2024-07-15 21:04:34.093159] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.964 21:04:34 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.964 Malloc0 00:14:06.964 [2024-07-15 21:04:34.156460] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1886356 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1886356 /var/tmp/bdevperf.sock 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1886356 ']' 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.964 { 00:14:06.964 "params": { 00:14:06.964 "name": "Nvme$subsystem", 00:14:06.964 "trtype": "$TEST_TRANSPORT", 00:14:06.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.964 "adrfam": "ipv4", 00:14:06.964 "trsvcid": "$NVMF_PORT", 00:14:06.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.964 "hdgst": ${hdgst:-false}, 00:14:06.964 "ddgst": ${ddgst:-false} 00:14:06.964 }, 00:14:06.964 "method": "bdev_nvme_attach_controller" 00:14:06.964 } 00:14:06.964 EOF 00:14:06.964 )") 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:06.964 21:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.964 "params": { 00:14:06.964 "name": "Nvme0", 00:14:06.964 "trtype": "tcp", 00:14:06.964 "traddr": "10.0.0.2", 00:14:06.964 "adrfam": "ipv4", 00:14:06.964 "trsvcid": "4420", 00:14:06.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.964 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:06.964 "hdgst": false, 00:14:06.964 "ddgst": false 00:14:06.964 }, 00:14:06.964 "method": "bdev_nvme_attach_controller" 00:14:06.964 }' 00:14:07.225 [2024-07-15 21:04:34.259054] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:07.225 [2024-07-15 21:04:34.259103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886356 ] 00:14:07.225 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.225 [2024-07-15 21:04:34.324946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.225 [2024-07-15 21:04:34.390012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.485 Running I/O for 10 seconds... 
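[editor's note] The run above hands bdevperf its target definition as JSON on an anonymous file descriptor (--json /dev/fd/63) built by gen_nvmf_target_json. A rough standalone sketch of the same invocation, assuming the usual SPDK JSON-config wrapper ("subsystems"/"bdev"/"config") around the bdev_nvme_attach_controller parameters printed above and an SPDK build under ./spdk:

    # Sketch only: wrap the parameters printed by gen_nvmf_target_json in a
    # config file and replay the bdevperf command line from the trace.
    cat > /tmp/nvme0_target.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # -q 64 outstanding I/Os, -o 65536-byte I/Os, verify workload, 10 s, private RPC socket
    ./spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/nvme0_target.json -q 64 -o 65536 -w verify -t 10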
00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:07.758 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=782 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 782 -ge 100 ']' 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.057 [2024-07-15 21:04:35.095429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095482] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 
00:14:08.057 [2024-07-15 21:04:35.095489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095516] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095522] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095534] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095558] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.095565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68c50 is same with the state(5) to be set 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.057 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.057 [2024-07-15 21:04:35.106660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.057 [2024-07-15 21:04:35.106696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.057 [2024-07-15 21:04:35.106706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.057 [2024-07-15 21:04:35.106713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.057 [2024-07-15 21:04:35.106721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.057 [2024-07-15 21:04:35.106728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.057 [2024-07-15 21:04:35.106736] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.057 [2024-07-15 21:04:35.106742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.057 [2024-07-15 21:04:35.106750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194e6e0 is same with the state(5) to be set 00:14:08.057 [2024-07-15 21:04:35.106825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.057 [2024-07-15 21:04:35.106836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.106989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.106996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.058 [2024-07-15 21:04:35.107529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.058 [2024-07-15 21:04:35.107536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.059 [2024-07-15 21:04:35.107896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.059 [2024-07-15 21:04:35.107947] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d5f560 was disconnected and freed. reset controller. 00:14:08.059 [2024-07-15 21:04:35.109123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:08.059 task offset: 114688 on job bdev=Nvme0n1 fails 00:14:08.059 00:14:08.059 Latency(us) 00:14:08.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.059 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:08.059 Job: Nvme0n1 ended in about 0.58 seconds with error 00:14:08.059 Verification LBA range: start 0x0 length 0x400 00:14:08.059 Nvme0n1 : 0.58 1556.94 97.31 111.21 0.00 37448.09 1720.32 32549.55 00:14:08.059 =================================================================================================================== 00:14:08.059 Total : 1556.94 97.31 111.21 0.00 37448.09 1720.32 32549.55 00:14:08.059 [2024-07-15 21:04:35.111089] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:08.059 [2024-07-15 21:04:35.111110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194e6e0 (9): Bad file descriptor 00:14:08.059 21:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.059 21:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:08.059 [2024-07-15 21:04:35.163694] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
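[editor's note] The read_io_count check earlier in this test (782 ops against a 100-op floor) comes from waitforio in host_management.sh, which polls bdevperf's private RPC socket before tearing the host path down. A minimal sketch of that loop, assuming rpc.py from the SPDK tree and an illustrative polling pause:

    # Poll the bdevperf RPC socket until Nvme0n1 reports enough completed reads,
    # mirroring the bdev_get_iostat / jq pipeline shown in the trace above.
    ret=1
    for ((i = 10; i > 0; i--)); do
        read_io_count=$(./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then   # same 100-op threshold as the test
            ret=0
            break
        fi
        sleep 0.25   # pause between polls; the interval here is illustrative
    done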
00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1886356 00:14:09.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1886356) - No such process 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:09.059 { 00:14:09.059 "params": { 00:14:09.059 "name": "Nvme$subsystem", 00:14:09.059 "trtype": "$TEST_TRANSPORT", 00:14:09.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:09.059 "adrfam": "ipv4", 00:14:09.059 "trsvcid": "$NVMF_PORT", 00:14:09.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:09.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:09.059 "hdgst": ${hdgst:-false}, 00:14:09.059 "ddgst": ${ddgst:-false} 00:14:09.059 }, 00:14:09.059 "method": "bdev_nvme_attach_controller" 00:14:09.059 } 00:14:09.059 EOF 00:14:09.059 )") 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:09.059 21:04:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:09.059 "params": { 00:14:09.059 "name": "Nvme0", 00:14:09.059 "trtype": "tcp", 00:14:09.059 "traddr": "10.0.0.2", 00:14:09.059 "adrfam": "ipv4", 00:14:09.059 "trsvcid": "4420", 00:14:09.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:09.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:09.059 "hdgst": false, 00:14:09.059 "ddgst": false 00:14:09.059 }, 00:14:09.059 "method": "bdev_nvme_attach_controller" 00:14:09.059 }' 00:14:09.059 [2024-07-15 21:04:36.171819] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:09.059 [2024-07-15 21:04:36.171874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886827 ] 00:14:09.059 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.059 [2024-07-15 21:04:36.237583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.059 [2024-07-15 21:04:36.302197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.320 Running I/O for 1 seconds... 
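[editor's note] The tolerated "No such process" on kill -9 1886356 above is expected: the first bdevperf had already been reset and stopped, and the trap armed earlier makes a dead perfpid harmless. The pattern, sketched with the pid and lock files from this run (process_shm and nvmftestfini are helpers from the suite's common.sh):

    # Teardown is armed up front; a perfpid that already exited must not abort the test.
    trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
    kill -9 "$perfpid" || true     # "No such process" is tolerated, as in the log above
    rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
          /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004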
00:14:10.703 00:14:10.703 Latency(us) 00:14:10.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.703 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:10.703 Verification LBA range: start 0x0 length 0x400 00:14:10.703 Nvme0n1 : 1.03 1548.89 96.81 0.00 0.00 40628.45 7645.87 35170.99 00:14:10.703 =================================================================================================================== 00:14:10.703 Total : 1548.89 96.81 0.00 0.00 40628.45 7645.87 35170.99 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.703 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.703 rmmod nvme_tcp 00:14:10.703 rmmod nvme_fabrics 00:14:10.703 rmmod nvme_keyring 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1886291 ']' 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1886291 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1886291 ']' 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1886291 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1886291 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1886291' 00:14:10.704 killing process with pid 1886291 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1886291 00:14:10.704 21:04:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1886291 00:14:10.964 [2024-07-15 21:04:38.016814] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:10.964 21:04:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.964 21:04:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.964 21:04:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:10.964 21:04:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.964 21:04:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.964 21:04:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.964 21:04:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.964 21:04:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.876 21:04:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:12.876 21:04:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:12.876 00:14:12.876 real 0m14.606s 00:14:12.876 user 0m21.216s 00:14:12.876 sys 0m6.862s 00:14:12.876 21:04:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.876 21:04:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.876 ************************************ 00:14:12.876 END TEST nvmf_host_management 00:14:12.876 ************************************ 00:14:12.876 21:04:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:12.876 21:04:40 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:12.876 21:04:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:12.876 21:04:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.876 21:04:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.137 ************************************ 00:14:13.137 START TEST nvmf_lvol 00:14:13.137 ************************************ 00:14:13.137 21:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:13.137 * Looking for test storage... 
00:14:13.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.137 21:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.137 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:13.137 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.138 21:04:40 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.138 21:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:21.279 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:21.279 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:21.279 Found net devices under 0000:31:00.0: cvl_0_0 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:21.279 Found net devices under 0000:31:00.1: cvl_0_1 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.279 
21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.279 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.540 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:14:21.540 00:14:21.540 --- 10.0.0.2 ping statistics --- 00:14:21.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.540 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:14:21.540 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:14:21.541 00:14:21.541 --- 10.0.0.1 ping statistics --- 00:14:21.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.541 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1891935 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1891935 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1891935 ']' 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.541 21:04:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:21.541 [2024-07-15 21:04:48.683001] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:21.541 [2024-07-15 21:04:48.683063] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.541 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.541 [2024-07-15 21:04:48.765389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.801 [2024-07-15 21:04:48.841036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.801 [2024-07-15 21:04:48.841078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:21.801 [2024-07-15 21:04:48.841085] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.801 [2024-07-15 21:04:48.841092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.801 [2024-07-15 21:04:48.841097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.801 [2024-07-15 21:04:48.841253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.801 [2024-07-15 21:04:48.841343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.801 [2024-07-15 21:04:48.841527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.371 21:04:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.371 21:04:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:22.371 21:04:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.371 21:04:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.371 21:04:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:22.371 21:04:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.371 21:04:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:22.371 [2024-07-15 21:04:49.645644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.631 21:04:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:22.631 21:04:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:22.631 21:04:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:22.892 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:22.892 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:23.152 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:23.152 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ce8ad39e-f04c-445d-a18b-78dc733617cb 00:14:23.152 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ce8ad39e-f04c-445d-a18b-78dc733617cb lvol 20 00:14:23.413 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e0390d94-81a4-4c94-9d6c-9e4422446039 00:14:23.413 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:23.674 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e0390d94-81a4-4c94-9d6c-9e4422446039 00:14:23.674 21:04:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:23.935 [2024-07-15 21:04:51.020206] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.935 21:04:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:23.935 21:04:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1892410 00:14:23.935 21:04:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:23.935 21:04:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:24.222 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.164 21:04:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e0390d94-81a4-4c94-9d6c-9e4422446039 MY_SNAPSHOT 00:14:25.164 21:04:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ce99d10d-52d6-413d-8855-33e88242ef28 00:14:25.164 21:04:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e0390d94-81a4-4c94-9d6c-9e4422446039 30 00:14:25.424 21:04:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ce99d10d-52d6-413d-8855-33e88242ef28 MY_CLONE 00:14:25.685 21:04:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8024fc22-601b-4c91-9830-ed44c9544045 00:14:25.685 21:04:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8024fc22-601b-4c91-9830-ed44c9544045 00:14:25.946 21:04:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1892410 00:14:35.951 Initializing NVMe Controllers 00:14:35.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:35.951 Controller IO queue size 128, less than required. 00:14:35.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:35.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:35.951 Initialization complete. Launching workers. 
00:14:35.951 ======================================================== 00:14:35.951 Latency(us) 00:14:35.951 Device Information : IOPS MiB/s Average min max 00:14:35.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 18257.40 71.32 7011.95 1407.56 57804.63 00:14:35.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12523.60 48.92 10224.86 3497.76 52383.47 00:14:35.951 ======================================================== 00:14:35.951 Total : 30781.00 120.24 8319.16 1407.56 57804.63 00:14:35.951 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e0390d94-81a4-4c94-9d6c-9e4422446039 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce8ad39e-f04c-445d-a18b-78dc733617cb 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.951 rmmod nvme_tcp 00:14:35.951 rmmod nvme_fabrics 00:14:35.951 rmmod nvme_keyring 00:14:35.951 21:05:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1891935 ']' 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1891935 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1891935 ']' 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1891935 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891935 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891935' 00:14:35.951 killing process with pid 1891935 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1891935 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1891935 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.951 
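Condensing the nvmf_lvol body that just ran into its underlying RPC sequence (every command below appears in the trace above; the $rpc shorthand and the capture of the lvstore/lvol UUIDs into shell variables are this summary's own conveniences, and the ce8ad39e-.../e0390d94-... UUIDs printed in the trace are specific to this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport init
$rpc bdev_malloc_create 64 512                               # Malloc0
$rpc bdev_malloc_create 64 512                               # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)               # lvstore on the RAID-0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)              # 20 MiB logical volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host-side load runs while snapshot/resize/clone/inflate are exercised on the lvol:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

Teardown then runs in reverse, as the trace around this point shows: nvmf_delete_subsystem, bdev_lvol_delete, bdev_lvol_delete_lvstore, followed by nvmftestfini unloading nvme-tcp/nvme-fabrics/nvme-keyring and removing the cvl_0_0_ns_spdk namespace.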
21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.951 21:05:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:37.354 00:14:37.354 real 0m24.096s 00:14:37.354 user 1m3.496s 00:14:37.354 sys 0m8.466s 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:37.354 ************************************ 00:14:37.354 END TEST nvmf_lvol 00:14:37.354 ************************************ 00:14:37.354 21:05:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:37.354 21:05:04 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:37.354 21:05:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:37.354 21:05:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.354 21:05:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.354 ************************************ 00:14:37.354 START TEST nvmf_lvs_grow 00:14:37.354 ************************************ 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:37.354 * Looking for test storage... 
00:14:37.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.354 21:05:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:45.493 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:45.493 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.493 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:45.493 Found net devices under 0000:31:00.0: cvl_0_0 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:45.494 Found net devices under 0000:31:00.1: cvl_0_1 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.722 ms 00:14:45.494 00:14:45.494 --- 10.0.0.2 ping statistics --- 00:14:45.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.494 rtt min/avg/max/mdev = 0.722/0.722/0.722/0.000 ms 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:14:45.494 00:14:45.494 --- 10.0.0.1 ping statistics --- 00:14:45.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.494 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1899236 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1899236 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1899236 ']' 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.494 21:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:45.494 [2024-07-15 21:05:12.625267] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:45.494 [2024-07-15 21:05:12.625331] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.494 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.494 [2024-07-15 21:05:12.708724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.755 [2024-07-15 21:05:12.781767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.755 [2024-07-15 21:05:12.781811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:45.755 [2024-07-15 21:05:12.781819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.755 [2024-07-15 21:05:12.781826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.755 [2024-07-15 21:05:12.781832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.755 [2024-07-15 21:05:12.781854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.326 21:05:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.326 21:05:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:46.326 21:05:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.326 21:05:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:46.326 21:05:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:46.326 21:05:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.326 21:05:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:46.326 [2024-07-15 21:05:13.585433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:46.586 ************************************ 00:14:46.586 START TEST lvs_grow_clean 00:14:46.586 ************************************ 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:46.586 21:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:46.847 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:14:46.847 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:14:46.847 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:47.107 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:47.107 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:47.107 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee lvol 150 00:14:47.107 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d7cbbaa4-df33-46c0-8587-4a64ae2793c7 00:14:47.107 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.107 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:47.367 [2024-07-15 21:05:14.467829] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:47.367 [2024-07-15 21:05:14.467884] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:47.367 true 00:14:47.367 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:14:47.367 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:47.367 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:47.367 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:47.627 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d7cbbaa4-df33-46c0-8587-4a64ae2793c7 00:14:47.887 21:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:47.887 [2024-07-15 21:05:15.085724] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.887 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1899810 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1899810 /var/tmp/bdevperf.sock 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1899810 ']' 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:48.148 21:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:48.148 [2024-07-15 21:05:15.309536] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
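The lvs_grow_clean case follows the same export pattern but places the lvstore on a file-backed AIO bdev so the store can be enlarged while a separate bdevperf process drives I/O over /var/tmp/bdevperf.sock. A condensed sketch of the flow recorded above and continued below; as before, $rpc/$aio and the UUID captures are shorthand for this summary only, while the paths, sizes and options are the ones shown in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
# (the TCP transport was already created at the start of nvmf_lvs_grow:
#  nvmf_create_transport -t tcp -o -u 8192)

truncate -s 200M "$aio"                                      # 200 MiB backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096                    # AIO bdev, 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)         # 49 data clusters at first
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)             # 150 MiB volume
truncate -s 400M "$aio"                                      # enlarge the file...
$rpc bdev_aio_rescan aio_bdev                                # ...and let the bdev see it
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# bdevperf is started with -z so it idles until driven over its own RPC socket:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &

# While the 10 s randwrite run is in flight, the store is grown (49 -> 99 clusters):
$rpc bdev_lvol_grow_lvstore -u "$lvs"

Further down, the trace checks the grown store's free_clusters, deletes the AIO bdev, and confirms with a NOT-wrapped bdev_lvol_get_lvstores that the lvstore is gone (the expected "No such device" JSON-RPC error).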
00:14:48.148 [2024-07-15 21:05:15.309589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899810 ] 00:14:48.148 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.148 [2024-07-15 21:05:15.391779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.408 [2024-07-15 21:05:15.456047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.980 21:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.980 21:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:48.980 21:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:49.240 Nvme0n1 00:14:49.241 21:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:49.501 [ 00:14:49.501 { 00:14:49.501 "name": "Nvme0n1", 00:14:49.501 "aliases": [ 00:14:49.501 "d7cbbaa4-df33-46c0-8587-4a64ae2793c7" 00:14:49.501 ], 00:14:49.501 "product_name": "NVMe disk", 00:14:49.501 "block_size": 4096, 00:14:49.501 "num_blocks": 38912, 00:14:49.501 "uuid": "d7cbbaa4-df33-46c0-8587-4a64ae2793c7", 00:14:49.501 "assigned_rate_limits": { 00:14:49.501 "rw_ios_per_sec": 0, 00:14:49.501 "rw_mbytes_per_sec": 0, 00:14:49.501 "r_mbytes_per_sec": 0, 00:14:49.501 "w_mbytes_per_sec": 0 00:14:49.501 }, 00:14:49.501 "claimed": false, 00:14:49.501 "zoned": false, 00:14:49.501 "supported_io_types": { 00:14:49.501 "read": true, 00:14:49.501 "write": true, 00:14:49.501 "unmap": true, 00:14:49.501 "flush": true, 00:14:49.501 "reset": true, 00:14:49.501 "nvme_admin": true, 00:14:49.501 "nvme_io": true, 00:14:49.501 "nvme_io_md": false, 00:14:49.501 "write_zeroes": true, 00:14:49.501 "zcopy": false, 00:14:49.501 "get_zone_info": false, 00:14:49.501 "zone_management": false, 00:14:49.501 "zone_append": false, 00:14:49.501 "compare": true, 00:14:49.501 "compare_and_write": true, 00:14:49.501 "abort": true, 00:14:49.501 "seek_hole": false, 00:14:49.501 "seek_data": false, 00:14:49.501 "copy": true, 00:14:49.501 "nvme_iov_md": false 00:14:49.501 }, 00:14:49.501 "memory_domains": [ 00:14:49.501 { 00:14:49.501 "dma_device_id": "system", 00:14:49.501 "dma_device_type": 1 00:14:49.501 } 00:14:49.501 ], 00:14:49.501 "driver_specific": { 00:14:49.501 "nvme": [ 00:14:49.501 { 00:14:49.501 "trid": { 00:14:49.501 "trtype": "TCP", 00:14:49.501 "adrfam": "IPv4", 00:14:49.501 "traddr": "10.0.0.2", 00:14:49.501 "trsvcid": "4420", 00:14:49.501 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:49.501 }, 00:14:49.502 "ctrlr_data": { 00:14:49.502 "cntlid": 1, 00:14:49.502 "vendor_id": "0x8086", 00:14:49.502 "model_number": "SPDK bdev Controller", 00:14:49.502 "serial_number": "SPDK0", 00:14:49.502 "firmware_revision": "24.09", 00:14:49.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:49.502 "oacs": { 00:14:49.502 "security": 0, 00:14:49.502 "format": 0, 00:14:49.502 "firmware": 0, 00:14:49.502 "ns_manage": 0 00:14:49.502 }, 00:14:49.502 "multi_ctrlr": true, 00:14:49.502 "ana_reporting": false 00:14:49.502 }, 
00:14:49.502 "vs": { 00:14:49.502 "nvme_version": "1.3" 00:14:49.502 }, 00:14:49.502 "ns_data": { 00:14:49.502 "id": 1, 00:14:49.502 "can_share": true 00:14:49.502 } 00:14:49.502 } 00:14:49.502 ], 00:14:49.502 "mp_policy": "active_passive" 00:14:49.502 } 00:14:49.502 } 00:14:49.502 ] 00:14:49.502 21:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1900149 00:14:49.502 21:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:49.502 21:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.502 Running I/O for 10 seconds... 00:14:50.444 Latency(us) 00:14:50.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.444 Nvme0n1 : 1.00 18048.00 70.50 0.00 0.00 0.00 0.00 0.00 00:14:50.444 =================================================================================================================== 00:14:50.444 Total : 18048.00 70.50 0.00 0.00 0.00 0.00 0.00 00:14:50.444 00:14:51.386 21:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:14:51.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.647 Nvme0n1 : 2.00 18112.00 70.75 0.00 0.00 0.00 0.00 0.00 00:14:51.647 =================================================================================================================== 00:14:51.647 Total : 18112.00 70.75 0.00 0.00 0.00 0.00 0.00 00:14:51.647 00:14:51.647 true 00:14:51.647 21:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:14:51.647 21:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:51.647 21:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:51.647 21:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:51.647 21:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1900149 00:14:52.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.590 Nvme0n1 : 3.00 18132.33 70.83 0.00 0.00 0.00 0.00 0.00 00:14:52.590 =================================================================================================================== 00:14:52.590 Total : 18132.33 70.83 0.00 0.00 0.00 0.00 0.00 00:14:52.590 00:14:53.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.576 Nvme0n1 : 4.00 18142.75 70.87 0.00 0.00 0.00 0.00 0.00 00:14:53.576 =================================================================================================================== 00:14:53.576 Total : 18142.75 70.87 0.00 0.00 0.00 0.00 0.00 00:14:53.576 00:14:54.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.546 Nvme0n1 : 5.00 18162.00 70.95 0.00 0.00 0.00 0.00 0.00 00:14:54.546 =================================================================================================================== 00:14:54.546 
Total : 18162.00 70.95 0.00 0.00 0.00 0.00 0.00 00:14:54.546 00:14:55.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.488 Nvme0n1 : 6.00 18185.50 71.04 0.00 0.00 0.00 0.00 0.00 00:14:55.488 =================================================================================================================== 00:14:55.488 Total : 18185.50 71.04 0.00 0.00 0.00 0.00 0.00 00:14:55.488 00:14:56.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.429 Nvme0n1 : 7.00 18210.86 71.14 0.00 0.00 0.00 0.00 0.00 00:14:56.429 =================================================================================================================== 00:14:56.429 Total : 18210.86 71.14 0.00 0.00 0.00 0.00 0.00 00:14:56.429 00:14:57.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.833 Nvme0n1 : 8.00 18222.38 71.18 0.00 0.00 0.00 0.00 0.00 00:14:57.833 =================================================================================================================== 00:14:57.833 Total : 18222.38 71.18 0.00 0.00 0.00 0.00 0.00 00:14:57.833 00:14:58.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.776 Nvme0n1 : 9.00 18231.33 71.22 0.00 0.00 0.00 0.00 0.00 00:14:58.776 =================================================================================================================== 00:14:58.776 Total : 18231.33 71.22 0.00 0.00 0.00 0.00 0.00 00:14:58.776 00:14:59.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.718 Nvme0n1 : 10.00 18244.40 71.27 0.00 0.00 0.00 0.00 0.00 00:14:59.718 =================================================================================================================== 00:14:59.718 Total : 18244.40 71.27 0.00 0.00 0.00 0.00 0.00 00:14:59.718 00:14:59.718 00:14:59.718 Latency(us) 00:14:59.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.718 Nvme0n1 : 10.00 18242.45 71.26 0.00 0.00 7014.47 2088.96 12451.84 00:14:59.718 =================================================================================================================== 00:14:59.718 Total : 18242.45 71.26 0.00 0.00 7014.47 2088.96 12451.84 00:14:59.718 0 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1899810 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1899810 ']' 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1899810 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1899810 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1899810' 00:14:59.718 killing process with pid 1899810 00:14:59.718 21:05:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1899810 00:14:59.718 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.718 00:14:59.718 Latency(us) 00:14:59.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.718 =================================================================================================================== 00:14:59.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1899810 00:14:59.718 21:05:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:59.979 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:59.979 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:14:59.979 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:00.239 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:00.239 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:00.239 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:00.239 [2024-07-15 21:05:27.504374] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:15:00.499 request: 00:15:00.499 { 00:15:00.499 "uuid": "504f0746-37f7-4292-bd4c-3ecbdb1252ee", 00:15:00.499 "method": "bdev_lvol_get_lvstores", 00:15:00.499 "req_id": 1 00:15:00.499 } 00:15:00.499 Got JSON-RPC error response 00:15:00.499 response: 00:15:00.499 { 00:15:00.499 "code": -19, 00:15:00.499 "message": "No such device" 00:15:00.499 } 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:00.499 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:00.759 aio_bdev 00:15:00.759 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d7cbbaa4-df33-46c0-8587-4a64ae2793c7 00:15:00.759 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d7cbbaa4-df33-46c0-8587-4a64ae2793c7 00:15:00.759 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.759 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:00.759 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.759 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.759 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:00.759 21:05:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d7cbbaa4-df33-46c0-8587-4a64ae2793c7 -t 2000 00:15:01.019 [ 00:15:01.019 { 00:15:01.019 "name": "d7cbbaa4-df33-46c0-8587-4a64ae2793c7", 00:15:01.019 "aliases": [ 00:15:01.019 "lvs/lvol" 00:15:01.019 ], 00:15:01.019 "product_name": "Logical Volume", 00:15:01.019 "block_size": 4096, 00:15:01.019 "num_blocks": 38912, 00:15:01.019 "uuid": "d7cbbaa4-df33-46c0-8587-4a64ae2793c7", 00:15:01.019 "assigned_rate_limits": { 00:15:01.019 "rw_ios_per_sec": 0, 00:15:01.019 "rw_mbytes_per_sec": 0, 00:15:01.019 "r_mbytes_per_sec": 0, 00:15:01.019 "w_mbytes_per_sec": 0 00:15:01.019 }, 00:15:01.019 "claimed": false, 00:15:01.019 "zoned": false, 00:15:01.019 "supported_io_types": { 00:15:01.019 "read": true, 00:15:01.019 "write": true, 00:15:01.019 "unmap": true, 00:15:01.019 "flush": false, 00:15:01.019 "reset": true, 00:15:01.019 "nvme_admin": false, 00:15:01.019 "nvme_io": false, 00:15:01.019 
"nvme_io_md": false, 00:15:01.019 "write_zeroes": true, 00:15:01.019 "zcopy": false, 00:15:01.019 "get_zone_info": false, 00:15:01.019 "zone_management": false, 00:15:01.019 "zone_append": false, 00:15:01.019 "compare": false, 00:15:01.019 "compare_and_write": false, 00:15:01.019 "abort": false, 00:15:01.019 "seek_hole": true, 00:15:01.019 "seek_data": true, 00:15:01.019 "copy": false, 00:15:01.019 "nvme_iov_md": false 00:15:01.019 }, 00:15:01.019 "driver_specific": { 00:15:01.019 "lvol": { 00:15:01.019 "lvol_store_uuid": "504f0746-37f7-4292-bd4c-3ecbdb1252ee", 00:15:01.019 "base_bdev": "aio_bdev", 00:15:01.019 "thin_provision": false, 00:15:01.019 "num_allocated_clusters": 38, 00:15:01.019 "snapshot": false, 00:15:01.019 "clone": false, 00:15:01.019 "esnap_clone": false 00:15:01.019 } 00:15:01.019 } 00:15:01.019 } 00:15:01.019 ] 00:15:01.019 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:01.019 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:15:01.019 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:01.019 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:01.019 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:15:01.019 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:01.279 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:01.279 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d7cbbaa4-df33-46c0-8587-4a64ae2793c7 00:15:01.538 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 504f0746-37f7-4292-bd4c-3ecbdb1252ee 00:15:01.538 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:01.798 00:15:01.798 real 0m15.254s 00:15:01.798 user 0m15.012s 00:15:01.798 sys 0m1.247s 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:01.798 ************************************ 00:15:01.798 END TEST lvs_grow_clean 00:15:01.798 ************************************ 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:01.798 ************************************ 00:15:01.798 START TEST lvs_grow_dirty 00:15:01.798 ************************************ 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:01.798 21:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.058 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:02.058 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:02.058 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:02.058 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:02.058 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:02.323 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:02.323 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:02.323 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eedb3824-652b-4c30-b29c-6519f7f3bf05 lvol 150 00:15:02.584 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1029d455-f730-4899-b33c-9042a7c97f2e 00:15:02.584 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:02.584 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:02.584 
[2024-07-15 21:05:29.792761] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:02.584 [2024-07-15 21:05:29.792817] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:02.584 true 00:15:02.584 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:02.584 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:02.845 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:02.845 21:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:02.845 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1029d455-f730-4899-b33c-9042a7c97f2e 00:15:03.106 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:03.366 [2024-07-15 21:05:30.414664] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1902903 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1902903 /var/tmp/bdevperf.sock 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1902903 ']' 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:03.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
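The trace above is the lvol-store setup and NVMe-oF export that the lvs_grow test drives through rpc.py before starting bdevperf. A condensed, standalone sketch of that sequence follows; it assumes a running nvmf_tgt, and the paths, bdev names, sizes and the 10.0.0.2:4420 listener are simply the values from this run, not requirements.

# Sketch only -- same rpc.py calls as in the trace above, values taken from this run.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$AIO_FILE"                          # 200 MiB backing file
$RPC_PY bdev_aio_create "$AIO_FILE" aio_bdev 4096     # aio bdev with 4 KiB blocks
lvs=$($RPC_PY bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)  # prints the lvstore UUID
lvol=$($RPC_PY bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol, prints its UUID

$RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that listener over TCP (bdev_nvme_attach_controller against nqn.2016-06.io.spdk:cnode0, as in the trace that follows) and runs randwrite I/O while the store is grown.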
00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.366 21:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:03.366 [2024-07-15 21:05:30.644143] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:03.366 [2024-07-15 21:05:30.644195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902903 ] 00:15:03.625 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.625 [2024-07-15 21:05:30.726392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.625 [2024-07-15 21:05:30.780373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.196 21:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.196 21:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:04.196 21:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:04.457 Nvme0n1 00:15:04.457 21:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:04.717 [ 00:15:04.717 { 00:15:04.717 "name": "Nvme0n1", 00:15:04.717 "aliases": [ 00:15:04.717 "1029d455-f730-4899-b33c-9042a7c97f2e" 00:15:04.717 ], 00:15:04.717 "product_name": "NVMe disk", 00:15:04.717 "block_size": 4096, 00:15:04.717 "num_blocks": 38912, 00:15:04.717 "uuid": "1029d455-f730-4899-b33c-9042a7c97f2e", 00:15:04.717 "assigned_rate_limits": { 00:15:04.717 "rw_ios_per_sec": 0, 00:15:04.717 "rw_mbytes_per_sec": 0, 00:15:04.717 "r_mbytes_per_sec": 0, 00:15:04.717 "w_mbytes_per_sec": 0 00:15:04.717 }, 00:15:04.717 "claimed": false, 00:15:04.717 "zoned": false, 00:15:04.717 "supported_io_types": { 00:15:04.717 "read": true, 00:15:04.717 "write": true, 00:15:04.717 "unmap": true, 00:15:04.717 "flush": true, 00:15:04.717 "reset": true, 00:15:04.717 "nvme_admin": true, 00:15:04.717 "nvme_io": true, 00:15:04.717 "nvme_io_md": false, 00:15:04.718 "write_zeroes": true, 00:15:04.718 "zcopy": false, 00:15:04.718 "get_zone_info": false, 00:15:04.718 "zone_management": false, 00:15:04.718 "zone_append": false, 00:15:04.718 "compare": true, 00:15:04.718 "compare_and_write": true, 00:15:04.718 "abort": true, 00:15:04.718 "seek_hole": false, 00:15:04.718 "seek_data": false, 00:15:04.718 "copy": true, 00:15:04.718 "nvme_iov_md": false 00:15:04.718 }, 00:15:04.718 "memory_domains": [ 00:15:04.718 { 00:15:04.718 "dma_device_id": "system", 00:15:04.718 "dma_device_type": 1 00:15:04.718 } 00:15:04.718 ], 00:15:04.718 "driver_specific": { 00:15:04.718 "nvme": [ 00:15:04.718 { 00:15:04.718 "trid": { 00:15:04.718 "trtype": "TCP", 00:15:04.718 "adrfam": "IPv4", 00:15:04.718 "traddr": "10.0.0.2", 00:15:04.718 "trsvcid": "4420", 00:15:04.718 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:04.718 }, 00:15:04.718 "ctrlr_data": { 00:15:04.718 "cntlid": 1, 00:15:04.718 "vendor_id": "0x8086", 00:15:04.718 "model_number": "SPDK bdev Controller", 00:15:04.718 "serial_number": "SPDK0", 
00:15:04.718 "firmware_revision": "24.09", 00:15:04.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:04.718 "oacs": { 00:15:04.718 "security": 0, 00:15:04.718 "format": 0, 00:15:04.718 "firmware": 0, 00:15:04.718 "ns_manage": 0 00:15:04.718 }, 00:15:04.718 "multi_ctrlr": true, 00:15:04.718 "ana_reporting": false 00:15:04.718 }, 00:15:04.718 "vs": { 00:15:04.718 "nvme_version": "1.3" 00:15:04.718 }, 00:15:04.718 "ns_data": { 00:15:04.718 "id": 1, 00:15:04.718 "can_share": true 00:15:04.718 } 00:15:04.718 } 00:15:04.718 ], 00:15:04.718 "mp_policy": "active_passive" 00:15:04.718 } 00:15:04.718 } 00:15:04.718 ] 00:15:04.718 21:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1903243 00:15:04.718 21:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:04.718 21:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.718 Running I/O for 10 seconds... 00:15:06.101 Latency(us) 00:15:06.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.101 Nvme0n1 : 1.00 17989.00 70.27 0.00 0.00 0.00 0.00 0.00 00:15:06.101 =================================================================================================================== 00:15:06.101 Total : 17989.00 70.27 0.00 0.00 0.00 0.00 0.00 00:15:06.101 00:15:06.670 21:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:06.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.670 Nvme0n1 : 2.00 18049.50 70.51 0.00 0.00 0.00 0.00 0.00 00:15:06.670 =================================================================================================================== 00:15:06.670 Total : 18049.50 70.51 0.00 0.00 0.00 0.00 0.00 00:15:06.670 00:15:06.931 true 00:15:06.931 21:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:06.931 21:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:06.931 21:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:06.931 21:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:06.931 21:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1903243 00:15:07.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.894 Nvme0n1 : 3.00 18091.00 70.67 0.00 0.00 0.00 0.00 0.00 00:15:07.894 =================================================================================================================== 00:15:07.894 Total : 18091.00 70.67 0.00 0.00 0.00 0.00 0.00 00:15:07.894 00:15:08.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.835 Nvme0n1 : 4.00 18128.25 70.81 0.00 0.00 0.00 0.00 0.00 00:15:08.835 =================================================================================================================== 00:15:08.835 Total : 18128.25 70.81 0.00 
0.00 0.00 0.00 0.00 00:15:08.835 00:15:09.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.778 Nvme0n1 : 5.00 18150.00 70.90 0.00 0.00 0.00 0.00 0.00 00:15:09.778 =================================================================================================================== 00:15:09.778 Total : 18150.00 70.90 0.00 0.00 0.00 0.00 0.00 00:15:09.778 00:15:10.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.717 Nvme0n1 : 6.00 18175.17 71.00 0.00 0.00 0.00 0.00 0.00 00:15:10.717 =================================================================================================================== 00:15:10.717 Total : 18175.17 71.00 0.00 0.00 0.00 0.00 0.00 00:15:10.717 00:15:12.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.097 Nvme0n1 : 7.00 18193.57 71.07 0.00 0.00 0.00 0.00 0.00 00:15:12.097 =================================================================================================================== 00:15:12.097 Total : 18193.57 71.07 0.00 0.00 0.00 0.00 0.00 00:15:12.097 00:15:13.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.060 Nvme0n1 : 8.00 18207.38 71.12 0.00 0.00 0.00 0.00 0.00 00:15:13.060 =================================================================================================================== 00:15:13.061 Total : 18207.38 71.12 0.00 0.00 0.00 0.00 0.00 00:15:13.061 00:15:14.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.001 Nvme0n1 : 9.00 18210.33 71.13 0.00 0.00 0.00 0.00 0.00 00:15:14.001 =================================================================================================================== 00:15:14.001 Total : 18210.33 71.13 0.00 0.00 0.00 0.00 0.00 00:15:14.001 00:15:14.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.941 Nvme0n1 : 10.00 18219.30 71.17 0.00 0.00 0.00 0.00 0.00 00:15:14.941 =================================================================================================================== 00:15:14.941 Total : 18219.30 71.17 0.00 0.00 0.00 0.00 0.00 00:15:14.941 00:15:14.941 00:15:14.941 Latency(us) 00:15:14.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.941 Nvme0n1 : 10.01 18221.99 71.18 0.00 0.00 7023.36 4341.76 12615.68 00:15:14.941 =================================================================================================================== 00:15:14.941 Total : 18221.99 71.18 0.00 0.00 7023.36 4341.76 12615.68 00:15:14.941 0 00:15:14.941 21:05:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1902903 00:15:14.941 21:05:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1902903 ']' 00:15:14.941 21:05:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1902903 00:15:14.941 21:05:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:14.941 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:14.941 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1902903 00:15:14.941 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:14.941 21:05:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:14.941 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1902903' 00:15:14.941 killing process with pid 1902903 00:15:14.941 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1902903 00:15:14.941 Received shutdown signal, test time was about 10.000000 seconds 00:15:14.941 00:15:14.941 Latency(us) 00:15:14.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.941 =================================================================================================================== 00:15:14.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.941 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1902903 00:15:14.941 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:15.201 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:15.201 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:15.201 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1899236 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1899236 00:15:15.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1899236 Killed "${NVMF_APP[@]}" "$@" 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1905262 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1905262 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1905262 ']' 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.461 21:05:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:15.461 [2024-07-15 21:05:42.740136] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:15.461 [2024-07-15 21:05:42.740191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.722 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.722 [2024-07-15 21:05:42.813471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.722 [2024-07-15 21:05:42.878899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.722 [2024-07-15 21:05:42.878936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.722 [2024-07-15 21:05:42.878944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.722 [2024-07-15 21:05:42.878950] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.722 [2024-07-15 21:05:42.878956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
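At this point the test has killed the first nvmf_tgt with kill -9 while the grown lvstore was still dirty, and the freshly started target above is about to re-attach the same backing file; the blobstore recovery notices that follow are the point of the dirty variant. The verification afterwards boils down to re-creating the aio bdev on the new target and re-reading the cluster counts, roughly as in the sketch below (RPC_PY, AIO_FILE and $lvs as in the earlier sketch; 61 free and 99 total clusters are the values this run expects).

# Sketch of the post-crash check; names reused from the earlier sketch.
$RPC_PY bdev_aio_create "$AIO_FILE" aio_bdev 4096                            # examining the store triggers blobstore recovery
$RPC_PY bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # 61 in this run
$RPC_PY bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99 in this run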
00:15:15.722 [2024-07-15 21:05:42.878973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.294 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.294 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:16.294 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.294 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.294 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:16.294 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.294 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:16.554 [2024-07-15 21:05:43.671875] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:16.554 [2024-07-15 21:05:43.671967] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:16.554 [2024-07-15 21:05:43.671996] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:16.554 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:16.554 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1029d455-f730-4899-b33c-9042a7c97f2e 00:15:16.554 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=1029d455-f730-4899-b33c-9042a7c97f2e 00:15:16.554 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:16.554 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:16.554 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:16.554 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:16.554 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:16.814 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1029d455-f730-4899-b33c-9042a7c97f2e -t 2000 00:15:16.814 [ 00:15:16.814 { 00:15:16.814 "name": "1029d455-f730-4899-b33c-9042a7c97f2e", 00:15:16.814 "aliases": [ 00:15:16.814 "lvs/lvol" 00:15:16.814 ], 00:15:16.814 "product_name": "Logical Volume", 00:15:16.814 "block_size": 4096, 00:15:16.814 "num_blocks": 38912, 00:15:16.814 "uuid": "1029d455-f730-4899-b33c-9042a7c97f2e", 00:15:16.814 "assigned_rate_limits": { 00:15:16.814 "rw_ios_per_sec": 0, 00:15:16.814 "rw_mbytes_per_sec": 0, 00:15:16.814 "r_mbytes_per_sec": 0, 00:15:16.814 "w_mbytes_per_sec": 0 00:15:16.814 }, 00:15:16.814 "claimed": false, 00:15:16.814 "zoned": false, 00:15:16.814 "supported_io_types": { 00:15:16.814 "read": true, 00:15:16.814 "write": true, 00:15:16.814 "unmap": true, 00:15:16.814 "flush": false, 00:15:16.814 "reset": true, 00:15:16.814 "nvme_admin": false, 00:15:16.814 "nvme_io": false, 00:15:16.814 "nvme_io_md": 
false, 00:15:16.814 "write_zeroes": true, 00:15:16.814 "zcopy": false, 00:15:16.814 "get_zone_info": false, 00:15:16.814 "zone_management": false, 00:15:16.814 "zone_append": false, 00:15:16.814 "compare": false, 00:15:16.814 "compare_and_write": false, 00:15:16.814 "abort": false, 00:15:16.814 "seek_hole": true, 00:15:16.814 "seek_data": true, 00:15:16.814 "copy": false, 00:15:16.814 "nvme_iov_md": false 00:15:16.814 }, 00:15:16.814 "driver_specific": { 00:15:16.814 "lvol": { 00:15:16.814 "lvol_store_uuid": "eedb3824-652b-4c30-b29c-6519f7f3bf05", 00:15:16.814 "base_bdev": "aio_bdev", 00:15:16.814 "thin_provision": false, 00:15:16.814 "num_allocated_clusters": 38, 00:15:16.814 "snapshot": false, 00:15:16.814 "clone": false, 00:15:16.814 "esnap_clone": false 00:15:16.814 } 00:15:16.814 } 00:15:16.814 } 00:15:16.814 ] 00:15:16.814 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:16.814 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:16.814 21:05:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:17.074 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:17.074 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:17.074 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:17.074 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:17.074 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:17.334 [2024-07-15 21:05:44.451813] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:17.334 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:17.595 request: 00:15:17.595 { 00:15:17.595 "uuid": "eedb3824-652b-4c30-b29c-6519f7f3bf05", 00:15:17.595 "method": "bdev_lvol_get_lvstores", 00:15:17.595 "req_id": 1 00:15:17.595 } 00:15:17.595 Got JSON-RPC error response 00:15:17.595 response: 00:15:17.595 { 00:15:17.595 "code": -19, 00:15:17.595 "message": "No such device" 00:15:17.595 } 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:17.595 aio_bdev 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1029d455-f730-4899-b33c-9042a7c97f2e 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=1029d455-f730-4899-b33c-9042a7c97f2e 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:17.595 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:17.856 21:05:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1029d455-f730-4899-b33c-9042a7c97f2e -t 2000 00:15:17.856 [ 00:15:17.856 { 00:15:17.856 "name": "1029d455-f730-4899-b33c-9042a7c97f2e", 00:15:17.856 "aliases": [ 00:15:17.856 "lvs/lvol" 00:15:17.856 ], 00:15:17.856 "product_name": "Logical Volume", 00:15:17.856 "block_size": 4096, 00:15:17.856 "num_blocks": 38912, 00:15:17.856 "uuid": "1029d455-f730-4899-b33c-9042a7c97f2e", 00:15:17.856 "assigned_rate_limits": { 00:15:17.856 "rw_ios_per_sec": 0, 00:15:17.856 "rw_mbytes_per_sec": 0, 00:15:17.856 "r_mbytes_per_sec": 0, 00:15:17.856 "w_mbytes_per_sec": 0 00:15:17.856 }, 00:15:17.856 "claimed": false, 00:15:17.856 "zoned": false, 00:15:17.856 "supported_io_types": { 
00:15:17.856 "read": true, 00:15:17.856 "write": true, 00:15:17.856 "unmap": true, 00:15:17.856 "flush": false, 00:15:17.856 "reset": true, 00:15:17.856 "nvme_admin": false, 00:15:17.856 "nvme_io": false, 00:15:17.856 "nvme_io_md": false, 00:15:17.856 "write_zeroes": true, 00:15:17.856 "zcopy": false, 00:15:17.856 "get_zone_info": false, 00:15:17.856 "zone_management": false, 00:15:17.856 "zone_append": false, 00:15:17.856 "compare": false, 00:15:17.856 "compare_and_write": false, 00:15:17.856 "abort": false, 00:15:17.856 "seek_hole": true, 00:15:17.856 "seek_data": true, 00:15:17.856 "copy": false, 00:15:17.856 "nvme_iov_md": false 00:15:17.856 }, 00:15:17.856 "driver_specific": { 00:15:17.856 "lvol": { 00:15:17.856 "lvol_store_uuid": "eedb3824-652b-4c30-b29c-6519f7f3bf05", 00:15:17.856 "base_bdev": "aio_bdev", 00:15:17.856 "thin_provision": false, 00:15:17.856 "num_allocated_clusters": 38, 00:15:17.856 "snapshot": false, 00:15:17.856 "clone": false, 00:15:17.856 "esnap_clone": false 00:15:17.856 } 00:15:17.856 } 00:15:17.856 } 00:15:17.856 ] 00:15:17.856 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:17.856 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:17.856 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:18.117 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:18.117 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:18.117 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:18.117 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:18.117 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1029d455-f730-4899-b33c-9042a7c97f2e 00:15:18.379 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eedb3824-652b-4c30-b29c-6519f7f3bf05 00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:18.640 00:15:18.640 real 0m16.887s 00:15:18.640 user 0m44.366s 00:15:18.640 sys 0m2.895s 00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:18.640 ************************************ 00:15:18.640 END TEST lvs_grow_dirty 00:15:18.640 ************************************ 00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:18.640 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:18.641 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:18.641 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:18.641 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:18.641 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:18.641 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:18.641 nvmf_trace.0 00:15:18.902 21:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:18.902 21:05:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:18.902 21:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:18.902 21:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:18.902 21:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:18.902 21:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:18.902 21:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.902 21:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.902 rmmod nvme_tcp 00:15:18.902 rmmod nvme_fabrics 00:15:18.902 rmmod nvme_keyring 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1905262 ']' 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1905262 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1905262 ']' 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1905262 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1905262 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1905262' 00:15:18.902 killing process with pid 1905262 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1905262 00:15:18.902 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1905262 00:15:19.164 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.164 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.164 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.164 
21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.164 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.164 21:05:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.164 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.164 21:05:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.079 21:05:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.079 00:15:21.079 real 0m43.916s 00:15:21.079 user 1m5.520s 00:15:21.079 sys 0m10.539s 00:15:21.079 21:05:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.079 21:05:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:21.079 ************************************ 00:15:21.079 END TEST nvmf_lvs_grow 00:15:21.079 ************************************ 00:15:21.079 21:05:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:21.079 21:05:48 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:21.079 21:05:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.079 21:05:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.079 21:05:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:21.079 ************************************ 00:15:21.079 START TEST nvmf_bdev_io_wait 00:15:21.079 ************************************ 00:15:21.079 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:21.341 * Looking for test storage... 
00:15:21.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.341 21:05:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:29.543 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.543 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:29.543 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:29.544 Found net devices under 0000:31:00.0: cvl_0_0 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:29.544 Found net devices under 0000:31:00.1: cvl_0_1 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:29.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:29.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.761 ms 00:15:29.544 00:15:29.544 --- 10.0.0.2 ping statistics --- 00:15:29.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.544 rtt min/avg/max/mdev = 0.761/0.761/0.761/0.000 ms 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:15:29.544 00:15:29.544 --- 10.0.0.1 ping statistics --- 00:15:29.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.544 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1910682 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1910682 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1910682 ']' 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.544 21:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:29.544 [2024-07-15 21:05:56.803157] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
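The trace above is nvmf_tcp_init from test/nvmf/common.sh wiring the two ice ports into a small target/initiator loop: the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2/24, the second port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1/24, TCP port 4420 is opened on the initiator interface, and reachability is checked with one ping in each direction. A condensed sketch of the commands the harness ran, with interface and namespace names taken from this run:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # default namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> default namespace

Both pings come back in under a millisecond (0.761 ms and 0.331 ms here), so the 10.0.0.0/24 loop is up before the target application is started.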
00:15:29.544 [2024-07-15 21:05:56.803256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.803 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.803 [2024-07-15 21:05:56.884602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.803 [2024-07-15 21:05:56.959763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.803 [2024-07-15 21:05:56.959801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.803 [2024-07-15 21:05:56.959809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.803 [2024-07-15 21:05:56.959816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.803 [2024-07-15 21:05:56.959821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.803 [2024-07-15 21:05:56.959966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.803 [2024-07-15 21:05:56.960082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.803 [2024-07-15 21:05:56.960257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.803 [2024-07-15 21:05:56.960261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.374 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.639 [2024-07-15 21:05:57.684131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
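At this point nvmfappstart has launched nvmf_tgt inside the target namespace with --wait-for-rpc, waitforlisten has confirmed /var/tmp/spdk.sock answers, and the test configures the target over JSON-RPC; rpc_cmd is the harness wrapper for issuing those RPCs (effectively scripts/rpc.py against /var/tmp/spdk.sock). Together with the subsystem setup in the lines that follow, the bring-up reduces to roughly this sketch, with binary paths shortened:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                          # poll until /var/tmp/spdk.sock is listening
    rpc_cmd bdev_set_options -p 5 -c 1                # shrink the bdev_io pool, the point of the io_wait test
    rpc_cmd framework_start_init                      # finish the startup deferred by --wait-for-rpc
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # "*** TCP Transport Init ***" in the trace
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0      # 64 MiB ramdisk with 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdev_set_options must be issued before framework_start_init, which is why this particular test starts the target with --wait-for-rpc while later tests in the run do not.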
00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.639 Malloc0 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.639 [2024-07-15 21:05:57.753501] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1911010 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1911013 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:30.639 { 00:15:30.639 "params": { 00:15:30.639 "name": "Nvme$subsystem", 00:15:30.639 "trtype": "$TEST_TRANSPORT", 00:15:30.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.639 "adrfam": "ipv4", 00:15:30.639 "trsvcid": "$NVMF_PORT", 00:15:30.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.639 "hdgst": ${hdgst:-false}, 00:15:30.639 "ddgst": ${ddgst:-false} 00:15:30.639 }, 00:15:30.639 "method": "bdev_nvme_attach_controller" 00:15:30.639 } 00:15:30.639 EOF 00:15:30.639 )") 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1911016 00:15:30.639 21:05:57 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1911018 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:30.639 { 00:15:30.639 "params": { 00:15:30.639 "name": "Nvme$subsystem", 00:15:30.639 "trtype": "$TEST_TRANSPORT", 00:15:30.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.639 "adrfam": "ipv4", 00:15:30.639 "trsvcid": "$NVMF_PORT", 00:15:30.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.639 "hdgst": ${hdgst:-false}, 00:15:30.639 "ddgst": ${ddgst:-false} 00:15:30.639 }, 00:15:30.639 "method": "bdev_nvme_attach_controller" 00:15:30.639 } 00:15:30.639 EOF 00:15:30.639 )") 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:30.639 { 00:15:30.639 "params": { 00:15:30.639 "name": "Nvme$subsystem", 00:15:30.639 "trtype": "$TEST_TRANSPORT", 00:15:30.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.639 "adrfam": "ipv4", 00:15:30.639 "trsvcid": "$NVMF_PORT", 00:15:30.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.639 "hdgst": ${hdgst:-false}, 00:15:30.639 "ddgst": ${ddgst:-false} 00:15:30.639 }, 00:15:30.639 "method": "bdev_nvme_attach_controller" 00:15:30.639 } 00:15:30.639 EOF 00:15:30.639 )") 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:30.639 { 00:15:30.639 "params": { 00:15:30.639 "name": "Nvme$subsystem", 00:15:30.639 "trtype": "$TEST_TRANSPORT", 00:15:30.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.639 "adrfam": "ipv4", 00:15:30.639 "trsvcid": "$NVMF_PORT", 00:15:30.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.639 "hdgst": ${hdgst:-false}, 00:15:30.639 "ddgst": ${ddgst:-false} 00:15:30.639 }, 00:15:30.639 "method": "bdev_nvme_attach_controller" 00:15:30.639 } 00:15:30.639 EOF 00:15:30.639 )") 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1911010 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:30.639 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:30.639 "params": { 00:15:30.639 "name": "Nvme1", 00:15:30.639 "trtype": "tcp", 00:15:30.639 "traddr": "10.0.0.2", 00:15:30.639 "adrfam": "ipv4", 00:15:30.639 "trsvcid": "4420", 00:15:30.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.639 "hdgst": false, 00:15:30.639 "ddgst": false 00:15:30.639 }, 00:15:30.639 "method": "bdev_nvme_attach_controller" 00:15:30.639 }' 00:15:30.640 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
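The four heredocs above are gen_nvmf_target_json being evaluated once per bdevperf instance: bdev_io_wait.sh starts four copies of bdevperf in parallel, one per workload, each pinned to its own core and given its own shared-memory id so they can run alongside the target. Stripped of the long workspace paths, the launch pattern is approximately:

    # -m: core mask, -i: shm id, -q: queue depth, -o: IO size in bytes, -w: workload, -t: seconds, -s: app memory in MB
    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    ./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!
    ./build/examples/bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    ./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    wait $WRITE_PID; wait $READ_PID; wait $FLUSH_PID; wait $UNMAP_PID

The /dev/fd/63 seen in the logged command lines is exactly this process substitution: each instance reads its generated config from it and attaches to the target before starting its one-second workload.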
00:15:30.640 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:30.640 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:30.640 "params": { 00:15:30.640 "name": "Nvme1", 00:15:30.640 "trtype": "tcp", 00:15:30.640 "traddr": "10.0.0.2", 00:15:30.640 "adrfam": "ipv4", 00:15:30.640 "trsvcid": "4420", 00:15:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.640 "hdgst": false, 00:15:30.640 "ddgst": false 00:15:30.640 }, 00:15:30.640 "method": "bdev_nvme_attach_controller" 00:15:30.640 }' 00:15:30.640 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:30.640 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:30.640 "params": { 00:15:30.640 "name": "Nvme1", 00:15:30.640 "trtype": "tcp", 00:15:30.640 "traddr": "10.0.0.2", 00:15:30.640 "adrfam": "ipv4", 00:15:30.640 "trsvcid": "4420", 00:15:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.640 "hdgst": false, 00:15:30.640 "ddgst": false 00:15:30.640 }, 00:15:30.640 "method": "bdev_nvme_attach_controller" 00:15:30.640 }' 00:15:30.640 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:30.640 21:05:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:30.640 "params": { 00:15:30.640 "name": "Nvme1", 00:15:30.640 "trtype": "tcp", 00:15:30.640 "traddr": "10.0.0.2", 00:15:30.640 "adrfam": "ipv4", 00:15:30.640 "trsvcid": "4420", 00:15:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.640 "hdgst": false, 00:15:30.640 "ddgst": false 00:15:30.640 }, 00:15:30.640 "method": "bdev_nvme_attach_controller" 00:15:30.640 }' 00:15:30.640 [2024-07-15 21:05:57.808545] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:30.640 [2024-07-15 21:05:57.808594] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:30.640 [2024-07-15 21:05:57.808733] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:30.640 [2024-07-15 21:05:57.808778] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:30.640 [2024-07-15 21:05:57.809786] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:30.640 [2024-07-15 21:05:57.809843] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:30.640 [2024-07-15 21:05:57.810400] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
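Each of those --json configs carries the same single bdev_nvme_attach_controller entry; the fragment printed by the trace above is reproduced here for readability (gen_nvmf_target_json wraps it in a full bdev-subsystem config, and that wrapper is outside this excerpt). It points every bdevperf at the subsystem and listener created earlier, so each instance drives a bdev named Nvme1n1, the job name that appears in the latency tables below.

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

With hdgst and ddgst left false, no header or data digests are negotiated on the NVMe/TCP connection for these runs.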
00:15:30.640 [2024-07-15 21:05:57.810443] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:30.640 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.640 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.901 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.901 [2024-07-15 21:05:57.952676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.901 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.901 [2024-07-15 21:05:57.993931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.901 [2024-07-15 21:05:58.003793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:30.901 [2024-07-15 21:05:58.042813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.901 [2024-07-15 21:05:58.043925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:30.901 [2024-07-15 21:05:58.093810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:30.901 [2024-07-15 21:05:58.102224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.901 [2024-07-15 21:05:58.152648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:31.161 Running I/O for 1 seconds... 00:15:31.161 Running I/O for 1 seconds... 00:15:31.161 Running I/O for 1 seconds... 00:15:31.161 Running I/O for 1 seconds... 00:15:32.102 00:15:32.102 Latency(us) 00:15:32.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.102 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:32.102 Nvme1n1 : 1.01 11135.71 43.50 0.00 0.00 11428.94 3904.85 16930.13 00:15:32.102 =================================================================================================================== 00:15:32.102 Total : 11135.71 43.50 0.00 0.00 11428.94 3904.85 16930.13 00:15:32.102 00:15:32.102 Latency(us) 00:15:32.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.102 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:32.102 Nvme1n1 : 1.00 186864.65 729.94 0.00 0.00 682.53 276.48 781.65 00:15:32.102 =================================================================================================================== 00:15:32.102 Total : 186864.65 729.94 0.00 0.00 682.53 276.48 781.65 00:15:32.102 00:15:32.102 Latency(us) 00:15:32.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.102 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:32.102 Nvme1n1 : 1.00 11219.84 43.83 0.00 0.00 11384.33 3358.72 26432.85 00:15:32.102 =================================================================================================================== 00:15:32.102 Total : 11219.84 43.83 0.00 0.00 11384.33 3358.72 26432.85 00:15:32.102 00:15:32.102 Latency(us) 00:15:32.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.102 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:32.102 Nvme1n1 : 1.00 14252.64 55.67 0.00 0.00 8955.27 4805.97 17694.72 00:15:32.102 =================================================================================================================== 00:15:32.102 Total : 14252.64 55.67 0.00 0.00 8955.27 4805.97 17694.72 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1911013 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1911016 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1911018 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.363 rmmod nvme_tcp 00:15:32.363 rmmod nvme_fabrics 00:15:32.363 rmmod nvme_keyring 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1910682 ']' 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1910682 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1910682 ']' 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1910682 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1910682 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1910682' 00:15:32.363 killing process with pid 1910682 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1910682 00:15:32.363 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1910682 00:15:32.623 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.623 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.623 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.623 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.623 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.623 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.623 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.623 21:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.168 21:06:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:35.168 00:15:35.168 real 0m13.479s 00:15:35.168 user 0m18.895s 00:15:35.168 sys 0m7.516s 00:15:35.168 21:06:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:35.168 21:06:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 ************************************ 00:15:35.168 END TEST nvmf_bdev_io_wait 00:15:35.168 ************************************ 00:15:35.168 21:06:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:35.168 21:06:01 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:35.168 21:06:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:35.168 21:06:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.168 21:06:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 ************************************ 00:15:35.168 START TEST nvmf_queue_depth 00:15:35.168 ************************************ 00:15:35.168 21:06:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:35.168 * Looking for test storage... 
00:15:35.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.168 21:06:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.169 21:06:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.310 
21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:43.310 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:43.310 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:43.310 Found net devices under 0000:31:00.0: cvl_0_0 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:43.310 Found net devices under 0000:31:00.1: cvl_0_1 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.310 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.311 21:06:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:43.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:43.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:15:43.311 00:15:43.311 --- 10.0.0.2 ping statistics --- 00:15:43.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.311 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:15:43.311 00:15:43.311 --- 10.0.0.1 ping statistics --- 00:15:43.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.311 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1916594 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1916594 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1916594 ']' 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.311 21:06:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 [2024-07-15 21:06:10.381475] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
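The namespace plumbing is repeated unchanged for the queue_depth test; the differences are in how the applications are started. Here the target runs on a single reactor (core mask 0x2) and without --wait-for-rpc, and, as the following lines show, bdevperf is launched in -z mode so it idles on its own RPC socket (/var/tmp/bdevperf.sock) with a 1024-deep verify workload queued up, to be driven later in the test beyond this excerpt. A condensed sketch under those assumptions, with the transport and subsystem RPCs in between mirroring the previous test:

    # single-reactor target for the queue-depth test; no --wait-for-rpc this time
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                              # /var/tmp/spdk.sock
    # bdevperf waits in -z mode on its own socket: queue depth 1024, 4 KiB IOs, verify, 10 s
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock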
00:15:43.311 [2024-07-15 21:06:10.381546] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.311 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.311 [2024-07-15 21:06:10.476427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.311 [2024-07-15 21:06:10.571604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.311 [2024-07-15 21:06:10.571659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.311 [2024-07-15 21:06:10.571669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.311 [2024-07-15 21:06:10.571676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.311 [2024-07-15 21:06:10.571681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.311 [2024-07-15 21:06:10.571706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.879 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.879 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:43.879 21:06:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.879 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:43.879 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 [2024-07-15 21:06:11.211283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 Malloc0 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.140 
21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 [2024-07-15 21:06:11.291243] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1916693 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1916693 /var/tmp/bdevperf.sock 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1916693 ']' 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:44.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.140 21:06:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 [2024-07-15 21:06:11.348414] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
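The queue_depth target configuration traced above comes down to a handful of RPCs; rpc_cmd is the harness wrapper around scripts/rpc.py (default socket /var/tmp/spdk.sock), so done by hand from the SPDK checkout it looks roughly like this sketch (the nvmf_tgt itself was started inside the namespace, as shown above):

# ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator-side load generator: queue depth 1024, 4096-byte verify I/O for 10 s
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The trace then attaches the target over the bdevperf RPC socket (bdev_nvme_attach_controller against /var/tmp/bdevperf.sock) and kicks off the run with bdevperf.py perform_tests, which produces the latency table that follows.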
00:15:44.140 [2024-07-15 21:06:11.348477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916693 ] 00:15:44.140 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.140 [2024-07-15 21:06:11.418978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.400 [2024-07-15 21:06:11.493123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.969 21:06:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.969 21:06:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:44.969 21:06:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.969 21:06:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.969 21:06:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.969 NVMe0n1 00:15:44.970 21:06:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.970 21:06:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.229 Running I/O for 10 seconds... 00:15:55.218 00:15:55.218 Latency(us) 00:15:55.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.218 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:55.218 Verification LBA range: start 0x0 length 0x4000 00:15:55.218 NVMe0n1 : 10.06 11465.13 44.79 0.00 0.00 88966.87 23156.05 79080.11 00:15:55.218 =================================================================================================================== 00:15:55.218 Total : 11465.13 44.79 0.00 0.00 88966.87 23156.05 79080.11 00:15:55.218 0 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1916693 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1916693 ']' 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1916693 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1916693 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1916693' 00:15:55.218 killing process with pid 1916693 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1916693 00:15:55.218 Received shutdown signal, test time was about 10.000000 seconds 00:15:55.218 00:15:55.218 Latency(us) 00:15:55.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.218 
=================================================================================================================== 00:15:55.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:55.218 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1916693 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.479 rmmod nvme_tcp 00:15:55.479 rmmod nvme_fabrics 00:15:55.479 rmmod nvme_keyring 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1916594 ']' 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1916594 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1916594 ']' 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1916594 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1916594 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1916594' 00:15:55.479 killing process with pid 1916594 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1916594 00:15:55.479 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1916594 00:15:55.739 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:55.739 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:55.739 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:55.739 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.739 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:55.739 21:06:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.739 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.739 21:06:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.649 21:06:24 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:57.649 00:15:57.649 real 0m22.979s 00:15:57.649 user 0m25.726s 00:15:57.649 sys 0m7.303s 00:15:57.649 21:06:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.649 21:06:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 ************************************ 00:15:57.649 END TEST nvmf_queue_depth 00:15:57.649 ************************************ 00:15:57.649 21:06:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:57.649 21:06:24 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:57.649 21:06:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:57.649 21:06:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.649 21:06:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:57.910 ************************************ 00:15:57.910 START TEST nvmf_target_multipath 00:15:57.910 ************************************ 00:15:57.910 21:06:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:57.910 * Looking for test storage... 00:15:57.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.910 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:57.911 21:06:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:06.049 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:06.049 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:06.049 Found net devices under 0000:31:00.0: cvl_0_0 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:06.049 Found net devices under 0000:31:00.1: cvl_0_1 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.049 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.050 21:06:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:16:06.050 00:16:06.050 --- 10.0.0.2 ping statistics --- 00:16:06.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.050 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:16:06.050 00:16:06.050 --- 10.0.0.1 ping statistics --- 00:16:06.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.050 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:06.050 only one NIC for nvmf test 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:06.050 rmmod nvme_tcp 00:16:06.050 rmmod nvme_fabrics 00:16:06.050 rmmod nvme_keyring 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.050 21:06:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:08.590 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.591 00:16:08.591 real 0m10.424s 00:16:08.591 user 0m2.284s 00:16:08.591 sys 0m6.006s 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.591 21:06:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:08.591 ************************************ 00:16:08.591 END TEST nvmf_target_multipath 00:16:08.591 ************************************ 00:16:08.591 21:06:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:08.591 21:06:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:08.591 21:06:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:08.591 21:06:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.591 21:06:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.591 ************************************ 00:16:08.591 START TEST nvmf_zcopy 00:16:08.591 ************************************ 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:08.591 * Looking for test storage... 
00:16:08.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
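As in the earlier suites, the host identity used by the initiator tooling is generated on the fly rather than hard-coded; a minimal sketch of what common.sh derives here (the exact parsing is an assumption, but it matches the logged values, where NVME_HOSTID is the UUID portion of the generated NQN):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: host ID is the trailing UUID of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# NVME_HOST is later handed to 'nvme connect' by suites that use the kernel initiator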
00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.591 21:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:16.726 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.726 
21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:16.726 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:16.726 Found net devices under 0000:31:00.0: cvl_0_0 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:16.726 Found net devices under 0000:31:00.1: cvl_0_1 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:16.726 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:16.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:16:16.727 00:16:16.727 --- 10.0.0.2 ping statistics --- 00:16:16.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.727 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:16:16.727 00:16:16.727 --- 10.0.0.1 ping statistics --- 00:16:16.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.727 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1928334 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1928334 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1928334 ']' 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.727 21:06:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:16.727 [2024-07-15 21:06:43.729446] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:16:16.727 [2024-07-15 21:06:43.729493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.727 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.727 [2024-07-15 21:06:43.821378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.727 [2024-07-15 21:06:43.912503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.727 [2024-07-15 21:06:43.912586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:16.727 [2024-07-15 21:06:43.912595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.727 [2024-07-15 21:06:43.912602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.727 [2024-07-15 21:06:43.912608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.727 [2024-07-15 21:06:43.912645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.300 [2024-07-15 21:06:44.551933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.300 [2024-07-15 21:06:44.576182] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.300 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.562 malloc0 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.562 
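For reference, the interface plumbing and target-side RPCs captured above condense into a short standalone sketch. This is a hand-written approximation of what nvmf/common.sh and target/zcopy.sh do at this point, not the scripts themselves: it assumes the same E810 device names (cvl_0_0 / cvl_0_1), the same 10.0.0.0/24 addressing, root privileges, and that SPDK_ROOT points at a built SPDK tree; the rpc.py invocations mirror the rpc_cmd lines in the log, and the wait for /var/tmp/spdk.sock stands in for waitforlisten.

```bash
#!/usr/bin/env bash
# Condensed sketch of the setup steps visible in the log above.
# Assumptions (not taken from the log): SPDK_ROOT and running as root.
set -euo pipefail

SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc="$SPDK_ROOT/scripts/rpc.py"
NS=cvl_0_0_ns_spdk

# Interface plumbing: target port in its own network namespace, initiator
# port in the default namespace, NVMe/TCP port 4420 opened on the initiator side.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec "$NS" ping -c 1 10.0.0.1   # reachability check, as in the log

# Start the target inside the namespace and wait for its RPC socket.
ip netns exec "$NS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

# Target-side configuration, mirroring the rpc_cmd lines above.
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```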
21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:17.562 { 00:16:17.562 "params": { 00:16:17.562 "name": "Nvme$subsystem", 00:16:17.562 "trtype": "$TEST_TRANSPORT", 00:16:17.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.562 "adrfam": "ipv4", 00:16:17.562 "trsvcid": "$NVMF_PORT", 00:16:17.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.562 "hdgst": ${hdgst:-false}, 00:16:17.562 "ddgst": ${ddgst:-false} 00:16:17.562 }, 00:16:17.562 "method": "bdev_nvme_attach_controller" 00:16:17.562 } 00:16:17.562 EOF 00:16:17.562 )") 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:17.562 21:06:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:17.562 "params": { 00:16:17.562 "name": "Nvme1", 00:16:17.562 "trtype": "tcp", 00:16:17.562 "traddr": "10.0.0.2", 00:16:17.562 "adrfam": "ipv4", 00:16:17.562 "trsvcid": "4420", 00:16:17.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.562 "hdgst": false, 00:16:17.562 "ddgst": false 00:16:17.562 }, 00:16:17.562 "method": "bdev_nvme_attach_controller" 00:16:17.562 }' 00:16:17.562 [2024-07-15 21:06:44.659247] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:16:17.562 [2024-07-15 21:06:44.659309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1928463 ] 00:16:17.562 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.562 [2024-07-15 21:06:44.726897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.562 [2024-07-15 21:06:44.792706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.824 Running I/O for 10 seconds... 
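The first I/O phase drives the target through an NVMe bdev attached over TCP, with bdevperf reading its configuration from the JSON that gen_nvmf_target_json pipes in over /dev/fd/62. The sketch below is a minimal stand-in for that configuration: only the bdev_nvme_attach_controller entry appears verbatim in the log, while the surrounding "subsystems" wrapper and the temporary file are assumptions made for readability (they follow the usual SPDK JSON-config layout rather than anything printed here).

```bash
# Sketch of the first bdevperf pass (10 s verify workload, queue depth 128, 8 KiB I/O).
# The attach parameters reproduce the values printed by gen_nvmf_target_json above;
# writing them to a temp file instead of /dev/fd/62 is an assumption.
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

"$SPDK_ROOT/build/examples/bdevperf" --json "$cfg" -t 10 -q 128 -w verify -o 8192
rm -f "$cfg"
```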
00:16:27.876 00:16:27.876 Latency(us) 00:16:27.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.876 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:27.876 Verification LBA range: start 0x0 length 0x1000 00:16:27.876 Nvme1n1 : 10.01 8524.69 66.60 0.00 0.00 14963.08 1952.43 27852.80 00:16:27.876 =================================================================================================================== 00:16:27.876 Total : 8524.69 66.60 0.00 0.00 14963.08 1952.43 27852.80 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1930552 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:27.876 { 00:16:27.876 "params": { 00:16:27.876 "name": "Nvme$subsystem", 00:16:27.876 "trtype": "$TEST_TRANSPORT", 00:16:27.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.876 "adrfam": "ipv4", 00:16:27.876 "trsvcid": "$NVMF_PORT", 00:16:27.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.876 "hdgst": ${hdgst:-false}, 00:16:27.876 "ddgst": ${ddgst:-false} 00:16:27.876 }, 00:16:27.876 "method": "bdev_nvme_attach_controller" 00:16:27.876 } 00:16:27.876 EOF 00:16:27.876 )") 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:27.876 [2024-07-15 21:06:55.146513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.876 [2024-07-15 21:06:55.146543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
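From here the test switches to a 5-second random read/write pass (-w randrw -M 50) run in the background under perfpid, and the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs that fill the rest of the output come from namespace-add RPCs issued while that I/O is in flight; they are the error path being exercised, not a failure of the run. Below is a hypothetical reproduction of that pattern, assuming the RPC is simply re-issued in a loop until bdevperf exits; the exact driver loop in target/zcopy.sh is not visible in this excerpt, and $cfg is the same JSON config sketched for the first pass.

```bash
# Hypothetical sketch of the second phase: keep random I/O running while
# repeatedly re-adding NSID 1, which the target rejects because the namespace
# is still attached to nqn.2016-06.io.spdk:cnode1.
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc="$SPDK_ROOT/scripts/rpc.py"

"$SPDK_ROOT/build/examples/bdevperf" --json "$cfg" -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# Each attempt is expected to log the pair seen in the output:
#   spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
#   nvmf_rpc_ns_paused: Unable to add namespace
while kill -0 "$perfpid" 2>/dev/null; do
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done

wait "$perfpid"
```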
00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:27.876 21:06:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:27.876 "params": { 00:16:27.876 "name": "Nvme1", 00:16:27.876 "trtype": "tcp", 00:16:27.876 "traddr": "10.0.0.2", 00:16:27.876 "adrfam": "ipv4", 00:16:27.876 "trsvcid": "4420", 00:16:27.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.876 "hdgst": false, 00:16:27.876 "ddgst": false 00:16:27.876 }, 00:16:27.876 "method": "bdev_nvme_attach_controller" 00:16:27.876 }' 00:16:27.876 [2024-07-15 21:06:55.158509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.876 [2024-07-15 21:06:55.158519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.170537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.170546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.182569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.182577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.189549] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:16:28.136 [2024-07-15 21:06:55.189598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930552 ] 00:16:28.136 [2024-07-15 21:06:55.194599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.194608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.206629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.206638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.136 [2024-07-15 21:06:55.218660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.218668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.230693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.230701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.242723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.242732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.253733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.136 [2024-07-15 21:06:55.254754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.254765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.266785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.266794] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.278817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.278827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.290849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.290863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.302878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.302887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.314909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.314918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.317981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.136 [2024-07-15 21:06:55.326939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.326948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.338977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.338993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.351006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.351014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.363034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.363042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.375065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.375073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.387107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.387118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.399140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.399153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.411182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.411193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.136 [2024-07-15 21:06:55.423192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.136 [2024-07-15 21:06:55.423201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.435220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.435228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.447254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.447262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.459335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.459344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.471315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.471329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.483344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.483353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.495388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.495404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 Running I/O for 5 seconds... 00:16:28.396 [2024-07-15 21:06:55.507406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.507416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.522632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.522649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.535163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.535179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.548558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.548574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.561179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.561195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.574802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.574818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.587480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.587495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.600093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.600108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.612842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.612857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.625849] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.625864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.639126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.639142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.652128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.652143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.664670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.664685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.396 [2024-07-15 21:06:55.677106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.396 [2024-07-15 21:06:55.677121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.689450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.689466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.702488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.702503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.715534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.715555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.728491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.728506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.741537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.741552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.755181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.755197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.768349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.768364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.781505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.781520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.794540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.794556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.807387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.807403] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.820680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.820695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.833992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.834007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.846929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.846945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.860188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.860203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.873022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.873037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.886199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.886214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.899574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.899590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.913222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.913240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.926422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.926438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.656 [2024-07-15 21:06:55.939528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.656 [2024-07-15 21:06:55.939542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:55.952559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:55.952574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:55.965820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:55.965840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:55.978660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:55.978675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:55.991973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:55.991988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.005070] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.005086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.018166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.018181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.030954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.030969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.043908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.043923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.056967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.056982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.070201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.070216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.083185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.083201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.096655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.096670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.109920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.109935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.123490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.123506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.136469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.136484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.149975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.149990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.162976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.162992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.176475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.176490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.189857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.189872] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.916 [2024-07-15 21:06:56.203426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.916 [2024-07-15 21:06:56.203441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.176 [2024-07-15 21:06:56.216550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.176 [2024-07-15 21:06:56.216570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.176 [2024-07-15 21:06:56.229175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.176 [2024-07-15 21:06:56.229190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.176 [2024-07-15 21:06:56.241844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.176 [2024-07-15 21:06:56.241859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.176 [2024-07-15 21:06:56.254089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.176 [2024-07-15 21:06:56.254104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.176 [2024-07-15 21:06:56.267712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.176 [2024-07-15 21:06:56.267727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.176 [2024-07-15 21:06:56.281029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.176 [2024-07-15 21:06:56.281045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.176 [2024-07-15 21:06:56.294479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.176 [2024-07-15 21:06:56.294493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.307397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.307412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.320758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.320773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.334397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.334411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.347602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.347617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.360523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.360538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.373593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.373608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.386175] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.386189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.399217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.399236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.412410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.412425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.425957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.425972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.439336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.439352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.452523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.452539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.177 [2024-07-15 21:06:56.465311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.177 [2024-07-15 21:06:56.465328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.478624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.478640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.492050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.492066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.505186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.505202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.518264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.518279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.531515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.531530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.544437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.544453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.556817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.556833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.570283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.570299] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.583788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.583803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.596989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.597004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.609560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.609575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.622597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.622612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.635440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.635456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.647678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.647694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.660982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.437 [2024-07-15 21:06:56.660998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.437 [2024-07-15 21:06:56.674285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.438 [2024-07-15 21:06:56.674302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.438 [2024-07-15 21:06:56.687742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.438 [2024-07-15 21:06:56.687758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.438 [2024-07-15 21:06:56.700814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.438 [2024-07-15 21:06:56.700829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.438 [2024-07-15 21:06:56.714145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.438 [2024-07-15 21:06:56.714160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.727753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.727768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.741058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.741073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.754129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.754145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.766748] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.766764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.779560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.779575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.792854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.792870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.805451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.805466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.818712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.818729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.832424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.832441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.844600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.844616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.857507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.857523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.870705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.870721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.883937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.883952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.897049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.897065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.910103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.910118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.922511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.922527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.935151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.935167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.948284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.948299] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.961460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.961476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.698 [2024-07-15 21:06:56.974548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.698 [2024-07-15 21:06:56.974563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:56.987968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:56.987984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.001491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.001506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.014661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.014676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.028265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.028282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.041329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.041345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.054670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.054686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.068063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.068079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.081479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.081495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.094446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.094461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.108052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.108067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.120644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.120660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.134251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.134266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.146779] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.146795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.159742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.159758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.172302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.172317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.185515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.185530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.198816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.198835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.212201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.212217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.225402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.225418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.959 [2024-07-15 21:06:57.238711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.959 [2024-07-15 21:06:57.238727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.221 [2024-07-15 21:06:57.252105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.221 [2024-07-15 21:06:57.252121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.221 [2024-07-15 21:06:57.265310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.221 [2024-07-15 21:06:57.265325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.221 [2024-07-15 21:06:57.278535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.221 [2024-07-15 21:06:57.278550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.221 [2024-07-15 21:06:57.290886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.221 [2024-07-15 21:06:57.290901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.221 [2024-07-15 21:06:57.303967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.221 [2024-07-15 21:06:57.303982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.221 [2024-07-15 21:06:57.316630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.221 [2024-07-15 21:06:57.316644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.221 [2024-07-15 21:06:57.329348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.221 [2024-07-15 21:06:57.329364] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:30.221 [2024-07-15 21:06:57.342815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:30.221 [2024-07-15 21:06:57.342830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two errors recur for every subsequent add-namespace attempt, roughly every 13 ms, up to the pair below ...]
00:16:33.357 [2024-07-15 21:07:00.516498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:33.357 [2024-07-15 21:07:00.516514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:33.357
00:16:33.357 Latency(us)
00:16:33.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:33.357 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:33.357 Nvme1n1 : 5.01 19444.04 151.91 0.00 0.00 6575.94 2921.81 16930.13
00:16:33.357 ===================================================================================================================
00:16:33.357 Total : 19444.04 151.91 0.00 0.00 6575.94 2921.81 16930.13
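The block above is the zcopy error-path loop: the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is already attached, so every RPC is rejected with the same pair of errors. The 151.91 MiB/s in the summary is consistent with 19444.04 IOPS at the 8192-byte I/O size shown in the Job line (19444.04 x 8192 / 2^20 is about 151.9). As a rough sketch only (the malloc0 backing bdev and an already-running target that exposes nqn.2016-06.io.spdk:cnode1 are assumptions, not taken from this run), the same rejection can be reproduced by hand with the SPDK RPC client:

  # hypothetical reproduction, not part of this run: a second add of the same NSID is refused
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512                           # 64 MiB bdev with 512-byte blocks (assumed backing device)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first attach of NSID 1 succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # repeat is rejected: "Requested NSID 1 already in use"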
00:16:33.357 [2024-07-15 21:07:00.525722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:33.357 [2024-07-15 21:07:00.525737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two errors recur for each remaining attempt, roughly every 12 ms, up to the pair below ...]
00:16:33.618 [2024-07-15 21:07:00.646028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:33.618 [2024-07-15 21:07:00.646039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:33.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1930552) - No such process
00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1930552
00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.618 delay0 00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.618 21:07:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:33.618 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.618 [2024-07-15 21:07:00.764896] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:41.753 Initializing NVMe Controllers 00:16:41.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:41.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:41.753 Initialization complete. Launching workers. 00:16:41.753 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5558 00:16:41.753 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5830, failed to submit 48 00:16:41.753 success 5672, unsuccess 158, failed 0 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.753 rmmod nvme_tcp 00:16:41.753 rmmod nvme_fabrics 00:16:41.753 rmmod nvme_keyring 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1928334 ']' 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1928334 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1928334 ']' 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1928334 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1928334 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:41.753 21:07:07 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1928334' 00:16:41.753 killing process with pid 1928334 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1928334 00:16:41.753 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1928334 00:16:41.754 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.754 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.754 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.754 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.754 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.754 21:07:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.754 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.754 21:07:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.693 21:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:42.693 00:16:42.693 real 0m34.434s 00:16:42.693 user 0m45.513s 00:16:42.693 sys 0m11.154s 00:16:42.693 21:07:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.694 21:07:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.694 ************************************ 00:16:42.694 END TEST nvmf_zcopy 00:16:42.694 ************************************ 00:16:42.694 21:07:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:42.694 21:07:09 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:42.694 21:07:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:42.694 21:07:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.694 21:07:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.954 ************************************ 00:16:42.954 START TEST nvmf_nmic 00:16:42.954 ************************************ 00:16:42.954 21:07:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:42.954 * Looking for test storage... 
00:16:42.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.954 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.955 21:07:10 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.955 21:07:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.089 
21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:51.089 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.089 21:07:17 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:51.089 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:51.089 Found net devices under 0000:31:00.0: cvl_0_0 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:51.089 Found net devices under 0000:31:00.1: cvl_0_1 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.089 21:07:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:16:51.089 00:16:51.089 --- 10.0.0.2 ping statistics --- 00:16:51.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.089 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:16:51.089 00:16:51.089 --- 10.0.0.1 ping statistics --- 00:16:51.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.089 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.089 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1937720 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1937720 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1937720 ']' 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.090 21:07:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.090 [2024-07-15 21:07:18.359066] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:16:51.090 [2024-07-15 21:07:18.359122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.349 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.349 [2024-07-15 21:07:18.437584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.349 [2024-07-15 21:07:18.509346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.349 [2024-07-15 21:07:18.509383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:51.349 [2024-07-15 21:07:18.509394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.349 [2024-07-15 21:07:18.509401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.349 [2024-07-15 21:07:18.509406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.349 [2024-07-15 21:07:18.509472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.349 [2024-07-15 21:07:18.509587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.349 [2024-07-15 21:07:18.509743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.349 [2024-07-15 21:07:18.509744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.920 [2024-07-15 21:07:19.178892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.920 Malloc0 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.920 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.182 [2024-07-15 21:07:19.238172] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:52.182 test case1: single bdev can't be used in multiple subsystems 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.182 [2024-07-15 21:07:19.274110] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:52.182 [2024-07-15 21:07:19.274130] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:52.182 [2024-07-15 21:07:19.274137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:52.182 request: 00:16:52.182 { 00:16:52.182 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:52.182 "namespace": { 00:16:52.182 "bdev_name": "Malloc0", 00:16:52.182 "no_auto_visible": false 00:16:52.182 }, 00:16:52.182 "method": "nvmf_subsystem_add_ns", 00:16:52.182 "req_id": 1 00:16:52.182 } 00:16:52.182 Got JSON-RPC error response 00:16:52.182 response: 00:16:52.182 { 00:16:52.182 "code": -32602, 00:16:52.182 "message": "Invalid parameters" 00:16:52.182 } 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:52.182 Adding namespace failed - expected result. 
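[Editor's note] The JSON-RPC exchange traced above is the expected negative result for test case1: Malloc0 is already claimed (type exclusive_write) by nqn.2016-06.io.spdk:cnode1, so attaching the same bdev to cnode2 is rejected with error -32602. A minimal out-of-harness sketch of the same check, assuming a running nvmf_tgt reachable over the default /var/tmp/spdk.sock and using only RPC calls that already appear in this trace (the plain shell variable `rpc` below is illustrative and not part of nmic.sh):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                              # TCP transport, as in target/nmic.sh@17
$rpc bdev_malloc_create 64 512 -b Malloc0                                 # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0             # first claim of Malloc0 succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: second claim of Malloc0 succeeded"
else
    echo "expected: a single bdev cannot back namespaces in two subsystems"   # matches the -32602 response above
fi
Under these assumptions the second nvmf_subsystem_add_ns fails exactly as logged, while adding a second listener (port 4421) to cnode1, which test case2 goes on to exercise, remains allowed.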
00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:52.182 test case2: host connect to nvmf target in multiple paths 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.182 [2024-07-15 21:07:19.286240] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.182 21:07:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:53.566 21:07:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:55.475 21:07:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.475 21:07:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:55.475 21:07:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.475 21:07:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:55.475 21:07:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:57.398 21:07:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:57.398 21:07:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:57.398 21:07:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.398 21:07:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:57.398 21:07:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.398 21:07:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:57.398 21:07:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:57.398 [global] 00:16:57.398 thread=1 00:16:57.398 invalidate=1 00:16:57.398 rw=write 00:16:57.398 time_based=1 00:16:57.398 runtime=1 00:16:57.398 ioengine=libaio 00:16:57.398 direct=1 00:16:57.398 bs=4096 00:16:57.398 iodepth=1 00:16:57.398 norandommap=0 00:16:57.398 numjobs=1 00:16:57.398 00:16:57.398 verify_dump=1 00:16:57.398 verify_backlog=512 00:16:57.398 verify_state_save=0 00:16:57.398 do_verify=1 00:16:57.398 verify=crc32c-intel 00:16:57.398 [job0] 00:16:57.398 filename=/dev/nvme0n1 00:16:57.398 Could not set queue depth (nvme0n1) 00:16:57.658 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.658 fio-3.35 00:16:57.658 Starting 1 thread 00:16:59.036 00:16:59.036 job0: (groupid=0, jobs=1): err= 0: pid=1939209: Mon Jul 15 21:07:25 2024 00:16:59.036 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1017msec) 00:16:59.036 slat (nsec): min=9717, max=25860, avg=24479.71, stdev=3808.51 
00:16:59.036 clat (usec): min=40971, max=42031, avg=41839.16, stdev=322.74 00:16:59.036 lat (usec): min=40996, max=42057, avg=41863.64, stdev=322.73 00:16:59.036 clat percentiles (usec): 00:16:59.036 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:59.036 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:59.036 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:59.036 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:59.036 | 99.99th=[42206] 00:16:59.036 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:16:59.036 slat (usec): min=9, max=27255, avg=79.90, stdev=1203.41 00:16:59.036 clat (usec): min=270, max=2567, avg=510.73, stdev=115.16 00:16:59.036 lat (usec): min=302, max=29823, avg=590.63, stdev=1296.72 00:16:59.036 clat percentiles (usec): 00:16:59.036 | 1.00th=[ 322], 5.00th=[ 392], 10.00th=[ 408], 20.00th=[ 424], 00:16:59.036 | 30.00th=[ 457], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 545], 00:16:59.036 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 586], 00:16:59.036 | 99.00th=[ 619], 99.50th=[ 627], 99.90th=[ 2573], 99.95th=[ 2573], 00:16:59.036 | 99.99th=[ 2573] 00:16:59.036 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:59.036 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:59.036 lat (usec) : 500=34.22%, 750=62.38% 00:16:59.036 lat (msec) : 4=0.19%, 50=3.21% 00:16:59.036 cpu : usr=0.98%, sys=0.98%, ctx=532, majf=0, minf=1 00:16:59.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.036 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.036 00:16:59.036 Run status group 0 (all jobs): 00:16:59.036 READ: bw=66.9KiB/s (68.5kB/s), 66.9KiB/s-66.9KiB/s (68.5kB/s-68.5kB/s), io=68.0KiB (69.6kB), run=1017-1017msec 00:16:59.036 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:16:59.036 00:16:59.036 Disk stats (read/write): 00:16:59.036 nvme0n1: ios=39/512, merge=0/0, ticks=1553/262, in_queue=1815, util=98.90% 00:16:59.036 21:07:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 
-- # nvmfcleanup 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:59.036 rmmod nvme_tcp 00:16:59.036 rmmod nvme_fabrics 00:16:59.036 rmmod nvme_keyring 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1937720 ']' 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1937720 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1937720 ']' 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1937720 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1937720 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1937720' 00:16:59.036 killing process with pid 1937720 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1937720 00:16:59.036 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1937720 00:16:59.297 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:59.297 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:59.297 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:59.297 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.297 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:59.297 21:07:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.297 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.297 21:07:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.838 21:07:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:01.838 00:17:01.838 real 0m18.532s 00:17:01.838 user 0m44.853s 00:17:01.838 sys 0m6.785s 00:17:01.838 21:07:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.838 21:07:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:01.838 ************************************ 00:17:01.838 END TEST nvmf_nmic 00:17:01.838 ************************************ 00:17:01.838 21:07:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:01.838 21:07:28 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:01.838 21:07:28 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:01.838 21:07:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.838 21:07:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.838 ************************************ 00:17:01.838 START TEST nvmf_fio_target 00:17:01.838 ************************************ 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:01.838 * Looking for test storage... 00:17:01.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:01.838 21:07:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.966 21:07:36 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:09.966 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:09.966 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.966 21:07:36 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:09.966 Found net devices under 0000:31:00.0: cvl_0_0 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:09.966 Found net devices under 0000:31:00.1: cvl_0_1 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.966 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:09.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:17:09.967 00:17:09.967 --- 10.0.0.2 ping statistics --- 00:17:09.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.967 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:17:09.967 00:17:09.967 --- 10.0.0.1 ping statistics --- 00:17:09.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.967 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1944153 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1944153 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1944153 ']' 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
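For reference, the nvmf_tcp_init sequence traced above reduces to roughly the shell below: the target-side port (cvl_0_0) is moved into a private network namespace, both ends get a 10.0.0.x/24 address, TCP port 4420 is left unfiltered, and both directions are ping-tested. Interface names and addresses are the ones used on this rig; this is a condensed sketch of what nvmf/common.sh does here, not a drop-in replacement for it.

  # move the target port into its own namespace and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # don't filter NVMe/TCP
  # sanity-check both directions before the target is started
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1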
00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.967 21:07:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.967 [2024-07-15 21:07:36.879351] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:17:09.967 [2024-07-15 21:07:36.879417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.967 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.967 [2024-07-15 21:07:36.960133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.967 [2024-07-15 21:07:37.032068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.967 [2024-07-15 21:07:37.032110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.967 [2024-07-15 21:07:37.032117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.967 [2024-07-15 21:07:37.032124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.967 [2024-07-15 21:07:37.032130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.967 [2024-07-15 21:07:37.032268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.967 [2024-07-15 21:07:37.032411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.967 [2024-07-15 21:07:37.032568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.967 [2024-07-15 21:07:37.032569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.620 21:07:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.620 21:07:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:10.620 21:07:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.620 21:07:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.620 21:07:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.620 21:07:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.620 21:07:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.620 [2024-07-15 21:07:37.839179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.620 21:07:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:10.880 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:10.880 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.140 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:11.140 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.140 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
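The target bring-up traced here is driven through scripts/rpc.py against the app started inside the namespace; condensed, the steps so far look roughly like the sketch below (paths shortened; the remaining bdev, RAID, subsystem and listener RPCs follow in the trace below). The comments reflect only the values visible in this run.

  # start the NVMe-oF target inside the test namespace
  # (-i 0: shm/instance id, -e 0xFFFF: tracepoint group mask, -m 0xF: 4-core mask)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # the harness waits for /var/tmp/spdk.sock to appear before issuing RPCs

  # enable the TCP transport with the options used by this test
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # 64 MB RAM-backed bdevs with 512-byte blocks; Malloc0/Malloc1 become plain
  # namespaces of cnode1, while the later Malloc bdevs feed the raid0/concat0 arrays
  scripts/rpc.py bdev_malloc_create 64 512   # -> Malloc0
  scripts/rpc.py bdev_malloc_create 64 512   # -> Malloc1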
00:17:11.140 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.399 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:11.399 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:11.658 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.658 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:11.658 21:07:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.917 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:11.917 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:12.176 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:12.176 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:12.176 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:12.436 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:12.436 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:12.696 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:12.696 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:12.696 21:07:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.954 [2024-07-15 21:07:40.101116] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.954 21:07:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:13.212 21:07:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:13.213 21:07:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:15.115 21:07:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:15.115 21:07:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.115 21:07:41 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.115 21:07:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:15.115 21:07:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:15.115 21:07:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:17.019 21:07:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:17.019 21:07:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:17.019 21:07:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.019 21:07:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:17.019 21:07:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.019 21:07:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:17.019 21:07:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:17.019 [global] 00:17:17.019 thread=1 00:17:17.019 invalidate=1 00:17:17.019 rw=write 00:17:17.019 time_based=1 00:17:17.019 runtime=1 00:17:17.019 ioengine=libaio 00:17:17.019 direct=1 00:17:17.019 bs=4096 00:17:17.019 iodepth=1 00:17:17.019 norandommap=0 00:17:17.019 numjobs=1 00:17:17.019 00:17:17.019 verify_dump=1 00:17:17.019 verify_backlog=512 00:17:17.019 verify_state_save=0 00:17:17.019 do_verify=1 00:17:17.019 verify=crc32c-intel 00:17:17.019 [job0] 00:17:17.019 filename=/dev/nvme0n1 00:17:17.019 [job1] 00:17:17.019 filename=/dev/nvme0n2 00:17:17.019 [job2] 00:17:17.019 filename=/dev/nvme0n3 00:17:17.019 [job3] 00:17:17.019 filename=/dev/nvme0n4 00:17:17.019 Could not set queue depth (nvme0n1) 00:17:17.019 Could not set queue depth (nvme0n2) 00:17:17.019 Could not set queue depth (nvme0n3) 00:17:17.019 Could not set queue depth (nvme0n4) 00:17:17.280 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.280 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.280 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.280 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.280 fio-3.35 00:17:17.280 Starting 4 threads 00:17:18.666 00:17:18.666 job0: (groupid=0, jobs=1): err= 0: pid=1945861: Mon Jul 15 21:07:45 2024 00:17:18.666 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:18.666 slat (nsec): min=7671, max=63198, avg=25406.36, stdev=5112.27 00:17:18.666 clat (usec): min=898, max=1452, avg=1200.71, stdev=73.99 00:17:18.666 lat (usec): min=906, max=1478, avg=1226.12, stdev=75.38 00:17:18.666 clat percentiles (usec): 00:17:18.666 | 1.00th=[ 1012], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1139], 00:17:18.666 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:17:18.666 | 70.00th=[ 1237], 80.00th=[ 1254], 90.00th=[ 1287], 95.00th=[ 1319], 00:17:18.666 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1450], 99.95th=[ 1450], 00:17:18.666 | 99.99th=[ 1450] 00:17:18.666 write: IOPS=516, BW=2066KiB/s (2116kB/s)(2068KiB/1001msec); 0 zone resets 00:17:18.666 slat (usec): min=5, max=1670, avg=25.63, stdev=83.55 00:17:18.666 clat 
(usec): min=359, max=941, avg=680.56, stdev=95.38 00:17:18.666 lat (usec): min=378, max=2434, avg=706.19, stdev=132.92 00:17:18.666 clat percentiles (usec): 00:17:18.666 | 1.00th=[ 424], 5.00th=[ 510], 10.00th=[ 545], 20.00th=[ 611], 00:17:18.666 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 717], 00:17:18.666 | 70.00th=[ 742], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 807], 00:17:18.666 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 938], 99.95th=[ 938], 00:17:18.666 | 99.99th=[ 938] 00:17:18.666 bw ( KiB/s): min= 4096, max= 4096, per=40.17%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.666 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.666 lat (usec) : 500=2.04%, 750=35.86%, 1000=12.73% 00:17:18.666 lat (msec) : 2=49.37% 00:17:18.666 cpu : usr=1.40%, sys=3.70%, ctx=1033, majf=0, minf=1 00:17:18.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.666 issued rwts: total=512,517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.666 job1: (groupid=0, jobs=1): err= 0: pid=1945862: Mon Jul 15 21:07:45 2024 00:17:18.666 read: IOPS=16, BW=66.2KiB/s (67.8kB/s)(68.0KiB/1027msec) 00:17:18.666 slat (nsec): min=24813, max=25602, avg=25086.76, stdev=243.74 00:17:18.666 clat (usec): min=1132, max=42982, avg=39892.04, stdev=9998.35 00:17:18.666 lat (usec): min=1157, max=43007, avg=39917.13, stdev=9998.36 00:17:18.666 clat percentiles (usec): 00:17:18.666 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41681], 20.00th=[41681], 00:17:18.666 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:18.666 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:17:18.666 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:18.666 | 99.99th=[42730] 00:17:18.666 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:17:18.666 slat (nsec): min=9662, max=49424, avg=23358.67, stdev=10932.60 00:17:18.666 clat (usec): min=312, max=996, avg=650.86, stdev=114.26 00:17:18.666 lat (usec): min=346, max=1008, avg=674.22, stdev=114.93 00:17:18.666 clat percentiles (usec): 00:17:18.666 | 1.00th=[ 420], 5.00th=[ 449], 10.00th=[ 515], 20.00th=[ 545], 00:17:18.666 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 693], 00:17:18.666 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 832], 00:17:18.666 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 996], 99.95th=[ 996], 00:17:18.666 | 99.99th=[ 996] 00:17:18.667 bw ( KiB/s): min= 4096, max= 4096, per=40.17%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.667 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.667 lat (usec) : 500=9.07%, 750=69.94%, 1000=17.77% 00:17:18.667 lat (msec) : 2=0.19%, 50=3.02% 00:17:18.667 cpu : usr=0.68%, sys=1.07%, ctx=532, majf=0, minf=1 00:17:18.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.667 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.667 job2: (groupid=0, jobs=1): err= 0: pid=1945869: Mon Jul 15 21:07:45 2024 00:17:18.667 read: 
IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:18.667 slat (nsec): min=26243, max=56421, avg=27345.78, stdev=2900.06 00:17:18.667 clat (usec): min=815, max=1401, avg=1118.14, stdev=93.46 00:17:18.667 lat (usec): min=841, max=1428, avg=1145.49, stdev=93.42 00:17:18.667 clat percentiles (usec): 00:17:18.667 | 1.00th=[ 865], 5.00th=[ 938], 10.00th=[ 1004], 20.00th=[ 1057], 00:17:18.667 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:17:18.667 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1270], 00:17:18.667 | 99.00th=[ 1336], 99.50th=[ 1352], 99.90th=[ 1401], 99.95th=[ 1401], 00:17:18.667 | 99.99th=[ 1401] 00:17:18.667 write: IOPS=564, BW=2258KiB/s (2312kB/s)(2260KiB/1001msec); 0 zone resets 00:17:18.667 slat (nsec): min=8881, max=68412, avg=31323.42, stdev=10108.10 00:17:18.667 clat (usec): min=327, max=1005, avg=684.76, stdev=109.03 00:17:18.667 lat (usec): min=337, max=1040, avg=716.09, stdev=110.43 00:17:18.667 clat percentiles (usec): 00:17:18.667 | 1.00th=[ 400], 5.00th=[ 486], 10.00th=[ 545], 20.00th=[ 603], 00:17:18.667 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 717], 00:17:18.667 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 816], 95.00th=[ 848], 00:17:18.667 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 1004], 99.95th=[ 1004], 00:17:18.667 | 99.99th=[ 1004] 00:17:18.667 bw ( KiB/s): min= 4096, max= 4096, per=40.17%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.667 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.667 lat (usec) : 500=3.25%, 750=34.08%, 1000=19.87% 00:17:18.667 lat (msec) : 2=42.80% 00:17:18.667 cpu : usr=2.40%, sys=4.10%, ctx=1078, majf=0, minf=1 00:17:18.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.667 issued rwts: total=512,565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.667 job3: (groupid=0, jobs=1): err= 0: pid=1945870: Mon Jul 15 21:07:45 2024 00:17:18.667 read: IOPS=545, BW=2182KiB/s (2234kB/s)(2184KiB/1001msec) 00:17:18.667 slat (nsec): min=4362, max=42816, avg=16450.75, stdev=5394.49 00:17:18.667 clat (usec): min=376, max=1117, avg=789.11, stdev=149.96 00:17:18.667 lat (usec): min=384, max=1131, avg=805.56, stdev=150.62 00:17:18.667 clat percentiles (usec): 00:17:18.667 | 1.00th=[ 474], 5.00th=[ 545], 10.00th=[ 603], 20.00th=[ 644], 00:17:18.667 | 30.00th=[ 668], 40.00th=[ 742], 50.00th=[ 816], 60.00th=[ 857], 00:17:18.667 | 70.00th=[ 906], 80.00th=[ 938], 90.00th=[ 971], 95.00th=[ 996], 00:17:18.667 | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1123], 99.95th=[ 1123], 00:17:18.667 | 99.99th=[ 1123] 00:17:18.667 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:17:18.667 slat (usec): min=5, max=1449, avg=21.71, stdev=66.90 00:17:18.667 clat (usec): min=141, max=805, avg=518.26, stdev=111.87 00:17:18.667 lat (usec): min=147, max=2128, avg=539.97, stdev=133.76 00:17:18.667 clat percentiles (usec): 00:17:18.667 | 1.00th=[ 265], 5.00th=[ 310], 10.00th=[ 371], 20.00th=[ 416], 00:17:18.667 | 30.00th=[ 469], 40.00th=[ 498], 50.00th=[ 523], 60.00th=[ 553], 00:17:18.667 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 685], 00:17:18.667 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 807], 99.95th=[ 807], 00:17:18.667 | 99.99th=[ 807] 00:17:18.667 bw ( KiB/s): min= 
4096, max= 4096, per=40.17%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.667 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.667 lat (usec) : 250=0.51%, 500=27.07%, 750=51.66%, 1000=19.11% 00:17:18.667 lat (msec) : 2=1.66% 00:17:18.667 cpu : usr=1.60%, sys=2.30%, ctx=1574, majf=0, minf=1 00:17:18.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.667 issued rwts: total=546,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.667 00:17:18.667 Run status group 0 (all jobs): 00:17:18.667 READ: bw=6181KiB/s (6329kB/s), 66.2KiB/s-2182KiB/s (67.8kB/s-2234kB/s), io=6348KiB (6500kB), run=1001-1027msec 00:17:18.667 WRITE: bw=9.96MiB/s (10.4MB/s), 1994KiB/s-4092KiB/s (2042kB/s-4190kB/s), io=10.2MiB (10.7MB), run=1001-1027msec 00:17:18.667 00:17:18.667 Disk stats (read/write): 00:17:18.667 nvme0n1: ios=433/512, merge=0/0, ticks=562/286, in_queue=848, util=86.77% 00:17:18.667 nvme0n2: ios=34/512, merge=0/0, ticks=1362/325, in_queue=1687, util=88.27% 00:17:18.667 nvme0n3: ios=428/512, merge=0/0, ticks=1286/286, in_queue=1572, util=92.28% 00:17:18.667 nvme0n4: ios=578/765, merge=0/0, ticks=609/383, in_queue=992, util=97.11% 00:17:18.667 21:07:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:18.667 [global] 00:17:18.667 thread=1 00:17:18.667 invalidate=1 00:17:18.667 rw=randwrite 00:17:18.667 time_based=1 00:17:18.667 runtime=1 00:17:18.667 ioengine=libaio 00:17:18.667 direct=1 00:17:18.667 bs=4096 00:17:18.667 iodepth=1 00:17:18.667 norandommap=0 00:17:18.667 numjobs=1 00:17:18.667 00:17:18.667 verify_dump=1 00:17:18.667 verify_backlog=512 00:17:18.667 verify_state_save=0 00:17:18.667 do_verify=1 00:17:18.667 verify=crc32c-intel 00:17:18.667 [job0] 00:17:18.667 filename=/dev/nvme0n1 00:17:18.667 [job1] 00:17:18.667 filename=/dev/nvme0n2 00:17:18.667 [job2] 00:17:18.667 filename=/dev/nvme0n3 00:17:18.667 [job3] 00:17:18.667 filename=/dev/nvme0n4 00:17:18.667 Could not set queue depth (nvme0n1) 00:17:18.667 Could not set queue depth (nvme0n2) 00:17:18.667 Could not set queue depth (nvme0n3) 00:17:18.667 Could not set queue depth (nvme0n4) 00:17:18.928 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.928 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.928 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.928 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.928 fio-3.35 00:17:18.928 Starting 4 threads 00:17:20.331 00:17:20.331 job0: (groupid=0, jobs=1): err= 0: pid=1946387: Mon Jul 15 21:07:47 2024 00:17:20.331 read: IOPS=18, BW=73.0KiB/s (74.8kB/s)(76.0KiB/1041msec) 00:17:20.331 slat (nsec): min=24697, max=26086, avg=25052.32, stdev=330.25 00:17:20.331 clat (usec): min=997, max=42066, avg=39649.90, stdev=9367.54 00:17:20.331 lat (usec): min=1022, max=42091, avg=39674.95, stdev=9367.61 00:17:20.331 clat percentiles (usec): 00:17:20.331 | 1.00th=[ 996], 5.00th=[ 996], 10.00th=[41157], 20.00th=[41157], 00:17:20.331 | 
30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:20.331 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:20.331 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:20.331 | 99.99th=[42206] 00:17:20.331 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:17:20.331 slat (nsec): min=9494, max=67534, avg=30823.07, stdev=7069.61 00:17:20.331 clat (usec): min=172, max=911, avg=521.88, stdev=131.42 00:17:20.331 lat (usec): min=200, max=945, avg=552.70, stdev=132.57 00:17:20.331 clat percentiles (usec): 00:17:20.331 | 1.00th=[ 260], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 404], 00:17:20.331 | 30.00th=[ 449], 40.00th=[ 498], 50.00th=[ 537], 60.00th=[ 570], 00:17:20.331 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 725], 00:17:20.331 | 99.00th=[ 791], 99.50th=[ 848], 99.90th=[ 914], 99.95th=[ 914], 00:17:20.331 | 99.99th=[ 914] 00:17:20.331 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:17:20.331 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:20.331 lat (usec) : 250=0.75%, 500=39.17%, 750=53.48%, 1000=3.20% 00:17:20.331 lat (msec) : 50=3.39% 00:17:20.331 cpu : usr=0.58%, sys=1.73%, ctx=532, majf=0, minf=1 00:17:20.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.331 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:20.331 job1: (groupid=0, jobs=1): err= 0: pid=1946388: Mon Jul 15 21:07:47 2024 00:17:20.331 read: IOPS=497, BW=1990KiB/s (2038kB/s)(1992KiB/1001msec) 00:17:20.331 slat (nsec): min=7437, max=59018, avg=27087.77, stdev=3558.58 00:17:20.331 clat (usec): min=816, max=42938, avg=1247.14, stdev=2641.31 00:17:20.331 lat (usec): min=844, max=42964, avg=1274.23, stdev=2641.28 00:17:20.331 clat percentiles (usec): 00:17:20.331 | 1.00th=[ 865], 5.00th=[ 930], 10.00th=[ 988], 20.00th=[ 1029], 00:17:20.331 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:17:20.331 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:17:20.331 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[42730], 99.95th=[42730], 00:17:20.331 | 99.99th=[42730] 00:17:20.331 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:20.331 slat (nsec): min=8839, max=67526, avg=29146.06, stdev=10081.41 00:17:20.331 clat (usec): min=334, max=941, avg=669.36, stdev=111.69 00:17:20.331 lat (usec): min=368, max=973, avg=698.51, stdev=116.44 00:17:20.331 clat percentiles (usec): 00:17:20.331 | 1.00th=[ 392], 5.00th=[ 461], 10.00th=[ 519], 20.00th=[ 578], 00:17:20.331 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 709], 00:17:20.331 | 70.00th=[ 734], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 832], 00:17:20.331 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 938], 99.95th=[ 938], 00:17:20.331 | 99.99th=[ 938] 00:17:20.331 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:17:20.331 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:20.331 lat (usec) : 500=4.06%, 750=34.55%, 1000=18.42% 00:17:20.331 lat (msec) : 2=42.77%, 50=0.20% 00:17:20.331 cpu : usr=1.90%, sys=4.10%, ctx=1011, majf=0, minf=1 00:17:20.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.331 issued rwts: total=498,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:20.331 job2: (groupid=0, jobs=1): err= 0: pid=1946389: Mon Jul 15 21:07:47 2024 00:17:20.331 read: IOPS=17, BW=71.8KiB/s (73.5kB/s)(72.0KiB/1003msec) 00:17:20.331 slat (nsec): min=5542, max=13050, avg=7729.44, stdev=2045.09 00:17:20.331 clat (usec): min=40906, max=42760, avg=41946.45, stdev=411.34 00:17:20.331 lat (usec): min=40914, max=42769, avg=41954.18, stdev=411.58 00:17:20.331 clat percentiles (usec): 00:17:20.331 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:20.331 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:20.331 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:17:20.331 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:20.331 | 99.99th=[42730] 00:17:20.331 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:17:20.331 slat (nsec): min=4052, max=32057, avg=5775.26, stdev=1520.57 00:17:20.331 clat (usec): min=175, max=807, avg=476.61, stdev=117.71 00:17:20.331 lat (usec): min=182, max=839, avg=482.39, stdev=117.79 00:17:20.331 clat percentiles (usec): 00:17:20.331 | 1.00th=[ 217], 5.00th=[ 269], 10.00th=[ 318], 20.00th=[ 379], 00:17:20.331 | 30.00th=[ 420], 40.00th=[ 449], 50.00th=[ 469], 60.00th=[ 502], 00:17:20.331 | 70.00th=[ 537], 80.00th=[ 586], 90.00th=[ 635], 95.00th=[ 668], 00:17:20.331 | 99.00th=[ 709], 99.50th=[ 742], 99.90th=[ 807], 99.95th=[ 807], 00:17:20.331 | 99.99th=[ 807] 00:17:20.331 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:17:20.331 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:20.331 lat (usec) : 250=3.40%, 500=54.15%, 750=38.68%, 1000=0.38% 00:17:20.331 lat (msec) : 50=3.40% 00:17:20.331 cpu : usr=0.20%, sys=0.20%, ctx=533, majf=0, minf=1 00:17:20.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.331 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:20.332 job3: (groupid=0, jobs=1): err= 0: pid=1946390: Mon Jul 15 21:07:47 2024 00:17:20.332 read: IOPS=509, BW=2038KiB/s (2087kB/s)(2040KiB/1001msec) 00:17:20.332 slat (nsec): min=10077, max=61585, avg=25832.86, stdev=3534.18 00:17:20.332 clat (usec): min=909, max=1424, avg=1175.21, stdev=79.64 00:17:20.332 lat (usec): min=935, max=1449, avg=1201.04, stdev=79.42 00:17:20.332 clat percentiles (usec): 00:17:20.332 | 1.00th=[ 996], 5.00th=[ 1045], 10.00th=[ 1074], 20.00th=[ 1106], 00:17:20.332 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:17:20.332 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1303], 00:17:20.332 | 99.00th=[ 1369], 99.50th=[ 1401], 99.90th=[ 1418], 99.95th=[ 1418], 00:17:20.332 | 99.99th=[ 1418] 00:17:20.332 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:20.332 slat (nsec): min=9245, max=50556, avg=28700.35, stdev=8571.21 00:17:20.332 clat (usec): min=322, max=1667, avg=712.14, stdev=136.35 
00:17:20.332 lat (usec): min=333, max=1710, avg=740.84, stdev=139.54 00:17:20.332 clat percentiles (usec): 00:17:20.332 | 1.00th=[ 412], 5.00th=[ 474], 10.00th=[ 529], 20.00th=[ 611], 00:17:20.332 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 717], 60.00th=[ 750], 00:17:20.332 | 70.00th=[ 783], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 922], 00:17:20.332 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1663], 99.95th=[ 1663], 00:17:20.332 | 99.99th=[ 1663] 00:17:20.332 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:17:20.332 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:20.332 lat (usec) : 500=2.94%, 750=26.81%, 1000=20.74% 00:17:20.332 lat (msec) : 2=49.51% 00:17:20.332 cpu : usr=1.30%, sys=3.20%, ctx=1022, majf=0, minf=1 00:17:20.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.332 issued rwts: total=510,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:20.332 00:17:20.332 Run status group 0 (all jobs): 00:17:20.332 READ: bw=4015KiB/s (4112kB/s), 71.8KiB/s-2038KiB/s (73.5kB/s-2087kB/s), io=4180KiB (4280kB), run=1001-1041msec 00:17:20.332 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2046KiB/s (2015kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1041msec 00:17:20.332 00:17:20.332 Disk stats (read/write): 00:17:20.332 nvme0n1: ios=69/512, merge=0/0, ticks=877/250, in_queue=1127, util=88.37% 00:17:20.332 nvme0n2: ios=339/512, merge=0/0, ticks=1059/296, in_queue=1355, util=89.28% 00:17:20.332 nvme0n3: ios=38/512, merge=0/0, ticks=1341/240, in_queue=1581, util=95.95% 00:17:20.332 nvme0n4: ios=370/512, merge=0/0, ticks=609/344, in_queue=953, util=97.80% 00:17:20.332 21:07:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:20.332 [global] 00:17:20.332 thread=1 00:17:20.332 invalidate=1 00:17:20.332 rw=write 00:17:20.332 time_based=1 00:17:20.332 runtime=1 00:17:20.332 ioengine=libaio 00:17:20.332 direct=1 00:17:20.332 bs=4096 00:17:20.332 iodepth=128 00:17:20.332 norandommap=0 00:17:20.332 numjobs=1 00:17:20.332 00:17:20.332 verify_dump=1 00:17:20.332 verify_backlog=512 00:17:20.332 verify_state_save=0 00:17:20.332 do_verify=1 00:17:20.332 verify=crc32c-intel 00:17:20.332 [job0] 00:17:20.332 filename=/dev/nvme0n1 00:17:20.332 [job1] 00:17:20.332 filename=/dev/nvme0n2 00:17:20.332 [job2] 00:17:20.332 filename=/dev/nvme0n3 00:17:20.332 [job3] 00:17:20.332 filename=/dev/nvme0n4 00:17:20.332 Could not set queue depth (nvme0n1) 00:17:20.332 Could not set queue depth (nvme0n2) 00:17:20.332 Could not set queue depth (nvme0n3) 00:17:20.332 Could not set queue depth (nvme0n4) 00:17:20.598 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.598 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.598 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.598 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.598 fio-3.35 00:17:20.598 Starting 4 threads 00:17:22.012 00:17:22.012 job0: (groupid=0, jobs=1): err= 0: 
pid=1946915: Mon Jul 15 21:07:48 2024 00:17:22.012 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:17:22.012 slat (nsec): min=906, max=7096.4k, avg=63831.92, stdev=460038.27 00:17:22.012 clat (usec): min=3215, max=14884, avg=8357.01, stdev=1874.67 00:17:22.012 lat (usec): min=3222, max=14886, avg=8420.84, stdev=1899.75 00:17:22.012 clat percentiles (usec): 00:17:22.012 | 1.00th=[ 4359], 5.00th=[ 5669], 10.00th=[ 6521], 20.00th=[ 7046], 00:17:22.012 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:17:22.012 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[11207], 95.00th=[12387], 00:17:22.012 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14353], 99.95th=[14877], 00:17:22.012 | 99.99th=[14877] 00:17:22.012 write: IOPS=8258, BW=32.3MiB/s (33.8MB/s)(32.4MiB/1004msec); 0 zone resets 00:17:22.012 slat (nsec): min=1648, max=16313k, avg=52639.33, stdev=332677.98 00:17:22.012 clat (usec): min=1713, max=21964, avg=6852.38, stdev=1697.53 00:17:22.012 lat (usec): min=1721, max=21974, avg=6905.02, stdev=1706.33 00:17:22.012 clat percentiles (usec): 00:17:22.012 | 1.00th=[ 2606], 5.00th=[ 3490], 10.00th=[ 4228], 20.00th=[ 5407], 00:17:22.012 | 30.00th=[ 6325], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7504], 00:17:22.012 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 9110], 00:17:22.012 | 99.00th=[10945], 99.50th=[11338], 99.90th=[14877], 99.95th=[17433], 00:17:22.012 | 99.99th=[21890] 00:17:22.012 bw ( KiB/s): min=32768, max=33048, per=28.40%, avg=32908.00, stdev=197.99, samples=2 00:17:22.012 iops : min= 8192, max= 8262, avg=8227.00, stdev=49.50, samples=2 00:17:22.012 lat (msec) : 2=0.06%, 4=3.87%, 10=86.16%, 20=9.91%, 50=0.01% 00:17:22.012 cpu : usr=5.78%, sys=7.68%, ctx=820, majf=0, minf=1 00:17:22.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:22.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.012 issued rwts: total=8192,8292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.012 job1: (groupid=0, jobs=1): err= 0: pid=1946916: Mon Jul 15 21:07:48 2024 00:17:22.012 read: IOPS=7775, BW=30.4MiB/s (31.8MB/s)(30.5MiB/1004msec) 00:17:22.012 slat (nsec): min=961, max=7364.6k, avg=64733.08, stdev=460048.66 00:17:22.012 clat (usec): min=2431, max=15065, avg=8523.85, stdev=1886.60 00:17:22.012 lat (usec): min=3594, max=15614, avg=8588.58, stdev=1907.92 00:17:22.012 clat percentiles (usec): 00:17:22.012 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7046], 00:17:22.012 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8455], 00:17:22.012 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[11469], 95.00th=[12387], 00:17:22.012 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14484], 99.95th=[14877], 00:17:22.012 | 99.99th=[15008] 00:17:22.012 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:17:22.012 slat (nsec): min=1690, max=37545k, avg=55369.85, stdev=519482.60 00:17:22.012 clat (usec): min=1182, max=44163, avg=7411.12, stdev=4300.82 00:17:22.012 lat (usec): min=1228, max=44171, avg=7466.49, stdev=4314.78 00:17:22.012 clat percentiles (usec): 00:17:22.012 | 1.00th=[ 2933], 5.00th=[ 3884], 10.00th=[ 4490], 20.00th=[ 5276], 00:17:22.012 | 30.00th=[ 6390], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7570], 00:17:22.012 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8717], 95.00th=[10421], 
00:17:22.012 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:17:22.012 | 99.99th=[44303] 00:17:22.012 bw ( KiB/s): min=32752, max=32784, per=28.28%, avg=32768.00, stdev=22.63, samples=2 00:17:22.012 iops : min= 8188, max= 8196, avg=8192.00, stdev= 5.66, samples=2 00:17:22.012 lat (msec) : 2=0.01%, 4=3.11%, 10=83.53%, 20=12.56%, 50=0.79% 00:17:22.012 cpu : usr=6.28%, sys=7.48%, ctx=654, majf=0, minf=1 00:17:22.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:22.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.012 issued rwts: total=7807,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.013 job2: (groupid=0, jobs=1): err= 0: pid=1946917: Mon Jul 15 21:07:48 2024 00:17:22.013 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:17:22.013 slat (nsec): min=963, max=11133k, avg=106369.43, stdev=780205.94 00:17:22.013 clat (usec): min=4213, max=23197, avg=12878.14, stdev=3112.16 00:17:22.013 lat (usec): min=4219, max=23206, avg=12984.50, stdev=3162.35 00:17:22.013 clat percentiles (usec): 00:17:22.013 | 1.00th=[ 5211], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11076], 00:17:22.013 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:17:22.013 | 70.00th=[12518], 80.00th=[15008], 90.00th=[17957], 95.00th=[19530], 00:17:22.013 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22676], 99.95th=[23200], 00:17:22.013 | 99.99th=[23200] 00:17:22.013 write: IOPS=5524, BW=21.6MiB/s (22.6MB/s)(21.8MiB/1009msec); 0 zone resets 00:17:22.013 slat (nsec): min=1693, max=11444k, avg=77202.68, stdev=368637.30 00:17:22.013 clat (usec): min=1174, max=22456, avg=10861.18, stdev=2554.08 00:17:22.013 lat (usec): min=1184, max=22461, avg=10938.38, stdev=2566.55 00:17:22.013 clat percentiles (usec): 00:17:22.013 | 1.00th=[ 3326], 5.00th=[ 5735], 10.00th=[ 7046], 20.00th=[ 8848], 00:17:22.013 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11863], 60.00th=[11994], 00:17:22.013 | 70.00th=[12125], 80.00th=[12125], 90.00th=[12387], 95.00th=[13960], 00:17:22.013 | 99.00th=[17171], 99.50th=[19268], 99.90th=[22152], 99.95th=[22414], 00:17:22.013 | 99.99th=[22414] 00:17:22.013 bw ( KiB/s): min=21616, max=21960, per=18.81%, avg=21788.00, stdev=243.24, samples=2 00:17:22.013 iops : min= 5404, max= 5490, avg=5447.00, stdev=60.81, samples=2 00:17:22.013 lat (msec) : 2=0.11%, 4=0.68%, 10=17.10%, 20=79.80%, 50=2.30% 00:17:22.013 cpu : usr=3.57%, sys=4.96%, ctx=667, majf=0, minf=1 00:17:22.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:22.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.013 issued rwts: total=5120,5574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.013 job3: (groupid=0, jobs=1): err= 0: pid=1946918: Mon Jul 15 21:07:48 2024 00:17:22.013 read: IOPS=7068, BW=27.6MiB/s (29.0MB/s)(27.7MiB/1004msec) 00:17:22.013 slat (nsec): min=886, max=4823.4k, avg=71964.90, stdev=450815.89 00:17:22.013 clat (usec): min=1349, max=14111, avg=9098.00, stdev=1129.55 00:17:22.013 lat (usec): min=5307, max=14469, avg=9169.97, stdev=1179.05 00:17:22.013 clat percentiles (usec): 00:17:22.013 | 1.00th=[ 6063], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[ 
8586], 00:17:22.013 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:17:22.013 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[10945], 00:17:22.013 | 99.00th=[12387], 99.50th=[12911], 99.90th=[13698], 99.95th=[13829], 00:17:22.013 | 99.99th=[14091] 00:17:22.013 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:17:22.013 slat (nsec): min=1513, max=4492.2k, avg=64337.62, stdev=310527.30 00:17:22.013 clat (usec): min=3567, max=13681, avg=8728.97, stdev=1207.57 00:17:22.013 lat (usec): min=3575, max=13693, avg=8793.31, stdev=1229.11 00:17:22.013 clat percentiles (usec): 00:17:22.013 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 7308], 20.00th=[ 8356], 00:17:22.013 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:17:22.013 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[10552], 00:17:22.013 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13173], 99.95th=[13566], 00:17:22.013 | 99.99th=[13698] 00:17:22.013 bw ( KiB/s): min=28592, max=28752, per=24.75%, avg=28672.00, stdev=113.14, samples=2 00:17:22.013 iops : min= 7148, max= 7188, avg=7168.00, stdev=28.28, samples=2 00:17:22.013 lat (msec) : 2=0.01%, 4=0.05%, 10=89.26%, 20=10.68% 00:17:22.013 cpu : usr=5.48%, sys=4.69%, ctx=848, majf=0, minf=1 00:17:22.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:22.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.013 issued rwts: total=7097,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.013 00:17:22.013 Run status group 0 (all jobs): 00:17:22.013 READ: bw=109MiB/s (115MB/s), 19.8MiB/s-31.9MiB/s (20.8MB/s-33.4MB/s), io=110MiB (116MB), run=1004-1009msec 00:17:22.013 WRITE: bw=113MiB/s (119MB/s), 21.6MiB/s-32.3MiB/s (22.6MB/s-33.8MB/s), io=114MiB (120MB), run=1004-1009msec 00:17:22.013 00:17:22.013 Disk stats (read/write): 00:17:22.013 nvme0n1: ios=6688/7020, merge=0/0, ticks=54020/45812, in_queue=99832, util=97.80% 00:17:22.013 nvme0n2: ios=6565/6656, merge=0/0, ticks=53068/43564, in_queue=96632, util=96.13% 00:17:22.013 nvme0n3: ios=4271/4608, merge=0/0, ticks=53761/48496, in_queue=102257, util=97.05% 00:17:22.013 nvme0n4: ios=5841/6144, merge=0/0, ticks=26066/24476, in_queue=50542, util=90.07% 00:17:22.013 21:07:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:22.013 [global] 00:17:22.013 thread=1 00:17:22.013 invalidate=1 00:17:22.013 rw=randwrite 00:17:22.013 time_based=1 00:17:22.013 runtime=1 00:17:22.013 ioengine=libaio 00:17:22.013 direct=1 00:17:22.013 bs=4096 00:17:22.013 iodepth=128 00:17:22.013 norandommap=0 00:17:22.013 numjobs=1 00:17:22.013 00:17:22.013 verify_dump=1 00:17:22.013 verify_backlog=512 00:17:22.013 verify_state_save=0 00:17:22.013 do_verify=1 00:17:22.013 verify=crc32c-intel 00:17:22.013 [job0] 00:17:22.013 filename=/dev/nvme0n1 00:17:22.013 [job1] 00:17:22.013 filename=/dev/nvme0n2 00:17:22.013 [job2] 00:17:22.013 filename=/dev/nvme0n3 00:17:22.013 [job3] 00:17:22.013 filename=/dev/nvme0n4 00:17:22.013 Could not set queue depth (nvme0n1) 00:17:22.013 Could not set queue depth (nvme0n2) 00:17:22.013 Could not set queue depth (nvme0n3) 00:17:22.013 Could not set queue depth (nvme0n4) 00:17:22.281 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:22.281 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:22.281 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:22.281 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:22.281 fio-3.35 00:17:22.281 Starting 4 threads 00:17:23.685 00:17:23.685 job0: (groupid=0, jobs=1): err= 0: pid=1947432: Mon Jul 15 21:07:50 2024 00:17:23.685 read: IOPS=4175, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1006msec) 00:17:23.686 slat (nsec): min=890, max=45496k, avg=124520.51, stdev=1176122.29 00:17:23.686 clat (usec): min=2549, max=86336, avg=15853.21, stdev=14697.95 00:17:23.686 lat (usec): min=2555, max=87988, avg=15977.73, stdev=14781.45 00:17:23.686 clat percentiles (usec): 00:17:23.686 | 1.00th=[ 2704], 5.00th=[ 3916], 10.00th=[ 4817], 20.00th=[ 5735], 00:17:23.686 | 30.00th=[ 7242], 40.00th=[ 8225], 50.00th=[10552], 60.00th=[13173], 00:17:23.686 | 70.00th=[16581], 80.00th=[23200], 90.00th=[38011], 95.00th=[42730], 00:17:23.686 | 99.00th=[77071], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:17:23.686 | 99.99th=[86508] 00:17:23.686 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:17:23.686 slat (nsec): min=1548, max=16617k, avg=99410.99, stdev=655661.65 00:17:23.686 clat (usec): min=1057, max=59450, avg=13240.43, stdev=11983.79 00:17:23.686 lat (usec): min=1115, max=59454, avg=13339.85, stdev=12052.25 00:17:23.686 clat percentiles (usec): 00:17:23.686 | 1.00th=[ 2147], 5.00th=[ 2769], 10.00th=[ 3916], 20.00th=[ 4883], 00:17:23.686 | 30.00th=[ 6521], 40.00th=[ 7898], 50.00th=[ 8848], 60.00th=[10552], 00:17:23.686 | 70.00th=[12387], 80.00th=[18482], 90.00th=[30278], 95.00th=[42206], 00:17:23.686 | 99.00th=[55313], 99.50th=[57410], 99.90th=[59507], 99.95th=[59507], 00:17:23.686 | 99.99th=[59507] 00:17:23.686 bw ( KiB/s): min=17648, max=19032, per=25.63%, avg=18340.00, stdev=978.64, samples=2 00:17:23.686 iops : min= 4412, max= 4758, avg=4585.00, stdev=244.66, samples=2 00:17:23.686 lat (msec) : 2=0.37%, 4=7.45%, 10=44.23%, 20=27.43%, 50=17.31% 00:17:23.686 lat (msec) : 100=3.21% 00:17:23.686 cpu : usr=2.49%, sys=4.08%, ctx=384, majf=0, minf=1 00:17:23.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:23.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.686 issued rwts: total=4201,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.686 job1: (groupid=0, jobs=1): err= 0: pid=1947433: Mon Jul 15 21:07:50 2024 00:17:23.686 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:17:23.686 slat (nsec): min=892, max=32328k, avg=72588.22, stdev=723497.85 00:17:23.686 clat (usec): min=2180, max=53317, avg=11344.81, stdev=8323.03 00:17:23.686 lat (usec): min=2182, max=61093, avg=11417.39, stdev=8369.65 00:17:23.686 clat percentiles (usec): 00:17:23.686 | 1.00th=[ 2966], 5.00th=[ 3884], 10.00th=[ 4686], 20.00th=[ 5473], 00:17:23.686 | 30.00th=[ 5800], 40.00th=[ 6849], 50.00th=[ 8455], 60.00th=[11731], 00:17:23.686 | 70.00th=[13829], 80.00th=[16450], 90.00th=[19792], 95.00th=[27657], 00:17:23.686 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:17:23.686 | 
99.99th=[53216] 00:17:23.686 write: IOPS=5984, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1004msec); 0 zone resets 00:17:23.686 slat (nsec): min=1525, max=23881k, avg=80447.20, stdev=611599.95 00:17:23.686 clat (usec): min=445, max=79857, avg=11461.12, stdev=11084.37 00:17:23.686 lat (usec): min=447, max=79865, avg=11541.57, stdev=11142.99 00:17:23.686 clat percentiles (usec): 00:17:23.686 | 1.00th=[ 1020], 5.00th=[ 2212], 10.00th=[ 2802], 20.00th=[ 4146], 00:17:23.686 | 30.00th=[ 5473], 40.00th=[ 6718], 50.00th=[ 8225], 60.00th=[ 9765], 00:17:23.686 | 70.00th=[11863], 80.00th=[17433], 90.00th=[23725], 95.00th=[32113], 00:17:23.686 | 99.00th=[68682], 99.50th=[76022], 99.90th=[78119], 99.95th=[80217], 00:17:23.686 | 99.99th=[80217] 00:17:23.686 bw ( KiB/s): min=14768, max=32280, per=32.88%, avg=23524.00, stdev=12382.85, samples=2 00:17:23.686 iops : min= 3692, max= 8070, avg=5881.00, stdev=3095.71, samples=2 00:17:23.686 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.43% 00:17:23.686 lat (msec) : 2=1.51%, 4=10.96%, 10=46.59%, 20=28.17%, 50=10.77% 00:17:23.686 lat (msec) : 100=1.49% 00:17:23.686 cpu : usr=4.19%, sys=6.28%, ctx=490, majf=0, minf=1 00:17:23.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:23.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.686 issued rwts: total=5120,6008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.686 job2: (groupid=0, jobs=1): err= 0: pid=1947436: Mon Jul 15 21:07:50 2024 00:17:23.686 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:17:23.686 slat (nsec): min=931, max=21244k, avg=198765.88, stdev=1204932.57 00:17:23.686 clat (usec): min=8627, max=60088, avg=24623.09, stdev=9822.17 00:17:23.686 lat (usec): min=8634, max=60116, avg=24821.86, stdev=9939.21 00:17:23.686 clat percentiles (usec): 00:17:23.686 | 1.00th=[10028], 5.00th=[12256], 10.00th=[15139], 20.00th=[16581], 00:17:23.686 | 30.00th=[17171], 40.00th=[18744], 50.00th=[21365], 60.00th=[25560], 00:17:23.686 | 70.00th=[28443], 80.00th=[33817], 90.00th=[40633], 95.00th=[43779], 00:17:23.686 | 99.00th=[46400], 99.50th=[48497], 99.90th=[52167], 99.95th=[55837], 00:17:23.686 | 99.99th=[60031] 00:17:23.686 write: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1007msec); 0 zone resets 00:17:23.686 slat (nsec): min=1598, max=19712k, avg=174865.54, stdev=937130.94 00:17:23.686 clat (usec): min=5337, max=65612, avg=23722.70, stdev=14683.63 00:17:23.686 lat (usec): min=5348, max=65619, avg=23897.57, stdev=14770.98 00:17:23.686 clat percentiles (usec): 00:17:23.686 | 1.00th=[ 7046], 5.00th=[ 7898], 10.00th=[ 9372], 20.00th=[11338], 00:17:23.686 | 30.00th=[13304], 40.00th=[15008], 50.00th=[20317], 60.00th=[23725], 00:17:23.686 | 70.00th=[26346], 80.00th=[34866], 90.00th=[47449], 95.00th=[57410], 00:17:23.686 | 99.00th=[63701], 99.50th=[64226], 99.90th=[65799], 99.95th=[65799], 00:17:23.686 | 99.99th=[65799] 00:17:23.686 bw ( KiB/s): min= 9056, max=11680, per=14.49%, avg=10368.00, stdev=1855.45, samples=2 00:17:23.686 iops : min= 2264, max= 2920, avg=2592.00, stdev=463.86, samples=2 00:17:23.686 lat (msec) : 10=8.45%, 20=37.68%, 50=49.18%, 100=4.70% 00:17:23.686 cpu : usr=1.89%, sys=3.58%, ctx=258, majf=0, minf=2 00:17:23.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:23.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.686 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.686 issued rwts: total=2560,2719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.686 job3: (groupid=0, jobs=1): err= 0: pid=1947437: Mon Jul 15 21:07:50 2024 00:17:23.686 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:17:23.686 slat (nsec): min=995, max=11857k, avg=99133.34, stdev=651661.83 00:17:23.686 clat (usec): min=4403, max=43749, avg=12118.83, stdev=5888.46 00:17:23.686 lat (usec): min=4410, max=43753, avg=12217.96, stdev=5941.14 00:17:23.686 clat percentiles (usec): 00:17:23.686 | 1.00th=[ 6325], 5.00th=[ 7111], 10.00th=[ 7701], 20.00th=[ 8225], 00:17:23.686 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10945], 60.00th=[11469], 00:17:23.686 | 70.00th=[12780], 80.00th=[13698], 90.00th=[17695], 95.00th=[23987], 00:17:23.686 | 99.00th=[36963], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:17:23.686 | 99.99th=[43779] 00:17:23.686 write: IOPS=4670, BW=18.2MiB/s (19.1MB/s)(18.4MiB/1009msec); 0 zone resets 00:17:23.686 slat (nsec): min=1608, max=10992k, avg=109708.06, stdev=594749.62 00:17:23.686 clat (usec): min=1183, max=53279, avg=15300.52, stdev=12449.64 00:17:23.686 lat (usec): min=1194, max=53288, avg=15410.23, stdev=12534.01 00:17:23.686 clat percentiles (usec): 00:17:23.686 | 1.00th=[ 3064], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 6325], 00:17:23.686 | 30.00th=[ 7046], 40.00th=[ 8225], 50.00th=[ 9896], 60.00th=[11338], 00:17:23.686 | 70.00th=[16450], 80.00th=[25297], 90.00th=[38011], 95.00th=[42730], 00:17:23.686 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:17:23.686 | 99.99th=[53216] 00:17:23.686 bw ( KiB/s): min=13832, max=23088, per=25.80%, avg=18460.00, stdev=6544.98, samples=2 00:17:23.686 iops : min= 3458, max= 5772, avg=4615.00, stdev=1636.25, samples=2 00:17:23.686 lat (msec) : 2=0.02%, 4=1.03%, 10=47.43%, 20=34.25%, 50=16.74% 00:17:23.686 lat (msec) : 100=0.54% 00:17:23.686 cpu : usr=2.58%, sys=5.95%, ctx=383, majf=0, minf=1 00:17:23.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:23.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.686 issued rwts: total=4608,4713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.686 00:17:23.686 Run status group 0 (all jobs): 00:17:23.686 READ: bw=63.8MiB/s (66.9MB/s), 9.93MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=64.4MiB (67.5MB), run=1004-1009msec 00:17:23.686 WRITE: bw=69.9MiB/s (73.3MB/s), 10.5MiB/s-23.4MiB/s (11.1MB/s-24.5MB/s), io=70.5MiB (73.9MB), run=1004-1009msec 00:17:23.686 00:17:23.686 Disk stats (read/write): 00:17:23.686 nvme0n1: ios=3601/4060, merge=0/0, ticks=28102/29928, in_queue=58030, util=82.87% 00:17:23.686 nvme0n2: ios=4650/5247, merge=0/0, ticks=38333/42905, in_queue=81238, util=88.70% 00:17:23.686 nvme0n3: ios=2049/2055, merge=0/0, ticks=27100/25287, in_queue=52387, util=92.41% 00:17:23.686 nvme0n4: ios=4145/4215, merge=0/0, ticks=45537/54375, in_queue=99912, util=97.01% 00:17:23.686 21:07:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:23.686 21:07:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1947629 00:17:23.686 21:07:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:23.686 21:07:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:23.686 [global] 00:17:23.686 thread=1 00:17:23.686 invalidate=1 00:17:23.686 rw=read 00:17:23.686 time_based=1 00:17:23.686 runtime=10 00:17:23.686 ioengine=libaio 00:17:23.686 direct=1 00:17:23.686 bs=4096 00:17:23.686 iodepth=1 00:17:23.686 norandommap=1 00:17:23.686 numjobs=1 00:17:23.686 00:17:23.686 [job0] 00:17:23.686 filename=/dev/nvme0n1 00:17:23.686 [job1] 00:17:23.686 filename=/dev/nvme0n2 00:17:23.686 [job2] 00:17:23.686 filename=/dev/nvme0n3 00:17:23.686 [job3] 00:17:23.686 filename=/dev/nvme0n4 00:17:23.686 Could not set queue depth (nvme0n1) 00:17:23.686 Could not set queue depth (nvme0n2) 00:17:23.686 Could not set queue depth (nvme0n3) 00:17:23.686 Could not set queue depth (nvme0n4) 00:17:23.946 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.946 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.946 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.946 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.946 fio-3.35 00:17:23.946 Starting 4 threads 00:17:26.487 21:07:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:26.487 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8658944, buflen=4096 00:17:26.487 fio: pid=1947963, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:26.746 21:07:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:26.746 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=24469504, buflen=4096 00:17:26.746 fio: pid=1947962, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:26.746 21:07:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:26.746 21:07:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:27.006 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=6848512, buflen=4096 00:17:27.006 fio: pid=1947960, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:27.006 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.006 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:27.006 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=21676032, buflen=4096 00:17:27.006 fio: pid=1947961, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:27.265 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.265 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:27.265 00:17:27.265 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1947960: Mon Jul 15 21:07:54 2024 
00:17:27.265 read: IOPS=573, BW=2294KiB/s (2349kB/s)(6688KiB/2915msec) 00:17:27.265 slat (usec): min=6, max=20687, avg=44.61, stdev=625.40 00:17:27.265 clat (usec): min=154, max=42987, avg=1688.96, stdev=5818.47 00:17:27.265 lat (usec): min=161, max=43011, avg=1733.58, stdev=5849.22 00:17:27.265 clat percentiles (usec): 00:17:27.265 | 1.00th=[ 229], 5.00th=[ 627], 10.00th=[ 709], 20.00th=[ 783], 00:17:27.265 | 30.00th=[ 816], 40.00th=[ 840], 50.00th=[ 857], 60.00th=[ 873], 00:17:27.265 | 70.00th=[ 898], 80.00th=[ 930], 90.00th=[ 988], 95.00th=[ 1106], 00:17:27.265 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:17:27.265 | 99.99th=[42730] 00:17:27.265 bw ( KiB/s): min= 96, max= 4608, per=10.20%, avg=1996.80, stdev=2292.46, samples=5 00:17:27.265 iops : min= 24, max= 1152, avg=499.20, stdev=573.12, samples=5 00:17:27.265 lat (usec) : 250=1.43%, 500=1.97%, 750=11.66%, 1000=76.27% 00:17:27.265 lat (msec) : 2=6.46%, 20=0.12%, 50=2.03% 00:17:27.265 cpu : usr=0.55%, sys=1.65%, ctx=1676, majf=0, minf=1 00:17:27.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.265 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.265 issued rwts: total=1673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.265 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1947961: Mon Jul 15 21:07:54 2024 00:17:27.265 read: IOPS=1720, BW=6879KiB/s (7045kB/s)(20.7MiB/3077msec) 00:17:27.265 slat (usec): min=6, max=18655, avg=38.42, stdev=500.14 00:17:27.265 clat (usec): min=131, max=41181, avg=537.50, stdev=612.87 00:17:27.265 lat (usec): min=151, max=41208, avg=575.92, stdev=787.53 00:17:27.265 clat percentiles (usec): 00:17:27.265 | 1.00th=[ 180], 5.00th=[ 225], 10.00th=[ 247], 20.00th=[ 310], 00:17:27.266 | 30.00th=[ 347], 40.00th=[ 379], 50.00th=[ 445], 60.00th=[ 537], 00:17:27.266 | 70.00th=[ 701], 80.00th=[ 799], 90.00th=[ 914], 95.00th=[ 979], 00:17:27.266 | 99.00th=[ 1045], 99.50th=[ 1057], 99.90th=[ 1123], 99.95th=[ 1156], 00:17:27.266 | 99.99th=[41157] 00:17:27.266 bw ( KiB/s): min= 4176, max=10688, per=32.49%, avg=6358.40, stdev=2595.94, samples=5 00:17:27.266 iops : min= 1044, max= 2672, avg=1589.60, stdev=648.99, samples=5 00:17:27.266 lat (usec) : 250=11.35%, 500=47.16%, 750=15.96%, 1000=22.09% 00:17:27.266 lat (msec) : 2=3.40%, 50=0.02% 00:17:27.266 cpu : usr=1.82%, sys=5.23%, ctx=5300, majf=0, minf=1 00:17:27.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.266 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.266 issued rwts: total=5293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.266 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1947962: Mon Jul 15 21:07:54 2024 00:17:27.266 read: IOPS=2179, BW=8715KiB/s (8924kB/s)(23.3MiB/2742msec) 00:17:27.266 slat (usec): min=6, max=12567, avg=21.71, stdev=162.58 00:17:27.266 clat (usec): min=138, max=42425, avg=432.47, stdev=958.99 00:17:27.266 lat (usec): min=144, max=54992, avg=454.18, stdev=1060.15 00:17:27.266 clat percentiles (usec): 00:17:27.266 | 1.00th=[ 161], 5.00th=[ 198], 10.00th=[ 223], 20.00th=[ 
249], 00:17:27.266 | 30.00th=[ 285], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 367], 00:17:27.266 | 70.00th=[ 392], 80.00th=[ 441], 90.00th=[ 865], 95.00th=[ 922], 00:17:27.266 | 99.00th=[ 1037], 99.50th=[ 1090], 99.90th=[ 1287], 99.95th=[41681], 00:17:27.266 | 99.99th=[42206] 00:17:27.266 bw ( KiB/s): min= 4344, max=11448, per=47.56%, avg=9307.20, stdev=2926.43, samples=5 00:17:27.266 iops : min= 1086, max= 2862, avg=2326.80, stdev=731.61, samples=5 00:17:27.266 lat (usec) : 250=20.17%, 500=63.16%, 750=1.36%, 1000=13.47% 00:17:27.266 lat (msec) : 2=1.74%, 4=0.03%, 50=0.05% 00:17:27.266 cpu : usr=1.86%, sys=5.14%, ctx=5976, majf=0, minf=1 00:17:27.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.266 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.266 issued rwts: total=5975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.266 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1947963: Mon Jul 15 21:07:54 2024 00:17:27.266 read: IOPS=826, BW=3304KiB/s (3384kB/s)(8456KiB/2559msec) 00:17:27.266 slat (nsec): min=6651, max=63002, avg=24920.07, stdev=6978.63 00:17:27.266 clat (usec): min=367, max=42914, avg=1179.16, stdev=3983.96 00:17:27.266 lat (usec): min=395, max=42940, avg=1204.08, stdev=3984.07 00:17:27.266 clat percentiles (usec): 00:17:27.266 | 1.00th=[ 506], 5.00th=[ 594], 10.00th=[ 635], 20.00th=[ 693], 00:17:27.266 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 832], 00:17:27.266 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 930], 95.00th=[ 979], 00:17:27.266 | 99.00th=[ 1844], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:17:27.266 | 99.99th=[42730] 00:17:27.266 bw ( KiB/s): min= 96, max= 5112, per=16.74%, avg=3276.80, stdev=2349.98, samples=5 00:17:27.266 iops : min= 24, max= 1278, avg=819.20, stdev=587.50, samples=5 00:17:27.266 lat (usec) : 500=0.95%, 750=34.37%, 1000=61.32% 00:17:27.266 lat (msec) : 2=2.36%, 50=0.95% 00:17:27.266 cpu : usr=0.94%, sys=3.48%, ctx=2115, majf=0, minf=2 00:17:27.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.266 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.266 issued rwts: total=2115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.266 00:17:27.266 Run status group 0 (all jobs): 00:17:27.266 READ: bw=19.1MiB/s (20.0MB/s), 2294KiB/s-8715KiB/s (2349kB/s-8924kB/s), io=58.8MiB (61.7MB), run=2559-3077msec 00:17:27.266 00:17:27.266 Disk stats (read/write): 00:17:27.266 nvme0n1: ios=1612/0, merge=0/0, ticks=2716/0, in_queue=2716, util=93.62% 00:17:27.266 nvme0n2: ios=4742/0, merge=0/0, ticks=2435/0, in_queue=2435, util=94.29% 00:17:27.266 nvme0n3: ios=5911/0, merge=0/0, ticks=2351/0, in_queue=2351, util=96.03% 00:17:27.266 nvme0n4: ios=1849/0, merge=0/0, ticks=2088/0, in_queue=2088, util=96.06% 00:17:27.266 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.266 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:27.525 21:07:54 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.525 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:27.785 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.785 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:27.785 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.785 21:07:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1947629 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:28.045 nvmf hotplug test: fio failed as expected 00:17:28.045 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:17:28.305 rmmod nvme_tcp 00:17:28.305 rmmod nvme_fabrics 00:17:28.305 rmmod nvme_keyring 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1944153 ']' 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1944153 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1944153 ']' 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1944153 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1944153 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1944153' 00:17:28.305 killing process with pid 1944153 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1944153 00:17:28.305 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1944153 00:17:28.565 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.565 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.565 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.565 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.565 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.565 21:07:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.565 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.565 21:07:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.475 21:07:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:30.475 00:17:30.475 real 0m29.159s 00:17:30.475 user 2m27.815s 00:17:30.475 sys 0m10.413s 00:17:30.475 21:07:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.475 21:07:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.475 ************************************ 00:17:30.475 END TEST nvmf_fio_target 00:17:30.475 ************************************ 00:17:30.736 21:07:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:30.736 21:07:57 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:30.736 21:07:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:30.736 21:07:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.736 21:07:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:30.736 ************************************ 
00:17:30.736 START TEST nvmf_bdevio 00:17:30.736 ************************************ 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:30.736 * Looking for test storage... 00:17:30.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.736 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.737 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.737 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.737 21:07:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:30.737 21:07:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.737 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:30.737 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:30.737 21:07:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.737 21:07:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:38.884 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:38.884 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:38.884 Found net devices under 0000:31:00.0: cvl_0_0 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:38.884 
Found net devices under 0000:31:00.1: cvl_0_1 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.884 21:08:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.884 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.884 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.884 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:17:38.884 00:17:38.884 --- 10.0.0.2 ping statistics --- 00:17:38.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.884 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:17:38.884 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:17:38.884 00:17:38.885 --- 10.0.0.1 ping statistics --- 00:17:38.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.885 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1953561 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1953561 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1953561 ']' 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.885 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 [2024-07-15 21:08:06.191401] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:17:39.146 [2024-07-15 21:08:06.191476] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.146 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.146 [2024-07-15 21:08:06.287558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.146 [2024-07-15 21:08:06.381738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.146 [2024-07-15 21:08:06.381801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
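The target bring-up traced above (nvmf_tcp_init followed by nvmfappstart -m 0x78) condenses to the shell sequence below. Interface names (cvl_0_0/cvl_0_1), addresses and the workspace path are copied from this run's trace; treating it as a standalone script outside the harness is a sketch, not something this log exercised on its own.

ip netns add cvl_0_0_ns_spdk                                  # the target runs in its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one E810 port into that namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator-side address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open TCP port 4420 (NVMe/TCP) on the test interface
ping -c 1 10.0.0.2                                            # sanity-check the data path (both directions are pinged above)
modprobe nvme-tcp                                             # kernel initiator modules for the later nvme connect/disconnect
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!                                                    # recorded by the harness as nvmfpid (1953561 in this run)
# waitforlisten then polls the RPC socket /var/tmp/spdk.sock until the target answers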
00:17:39.146 [2024-07-15 21:08:06.381810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.146 [2024-07-15 21:08:06.381817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.146 [2024-07-15 21:08:06.381823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.146 [2024-07-15 21:08:06.381988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:39.146 [2024-07-15 21:08:06.382149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:39.146 [2024-07-15 21:08:06.382310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:39.146 [2024-07-15 21:08:06.382333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.719 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.719 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:39.719 21:08:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.719 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.719 21:08:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:39.981 [2024-07-15 21:08:07.034530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:39.981 Malloc0 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
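The subsystem that bdevio attaches to is configured purely over RPC. rpc_cmd in the harness forwards its arguments to scripts/rpc.py against /var/tmp/spdk.sock, so an equivalent standalone sequence is roughly the following (flags and names are copied from the trace above; running it by hand is a sketch under that assumption):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # create the TCP transport
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420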
00:17:39.981 [2024-07-15 21:08:07.099488] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:39.981 { 00:17:39.981 "params": { 00:17:39.981 "name": "Nvme$subsystem", 00:17:39.981 "trtype": "$TEST_TRANSPORT", 00:17:39.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:39.981 "adrfam": "ipv4", 00:17:39.981 "trsvcid": "$NVMF_PORT", 00:17:39.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:39.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:39.981 "hdgst": ${hdgst:-false}, 00:17:39.981 "ddgst": ${ddgst:-false} 00:17:39.981 }, 00:17:39.981 "method": "bdev_nvme_attach_controller" 00:17:39.981 } 00:17:39.981 EOF 00:17:39.981 )") 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:39.981 21:08:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:39.981 "params": { 00:17:39.981 "name": "Nvme1", 00:17:39.981 "trtype": "tcp", 00:17:39.981 "traddr": "10.0.0.2", 00:17:39.981 "adrfam": "ipv4", 00:17:39.981 "trsvcid": "4420", 00:17:39.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.981 "hdgst": false, 00:17:39.981 "ddgst": false 00:17:39.981 }, 00:17:39.981 "method": "bdev_nvme_attach_controller" 00:17:39.981 }' 00:17:39.981 [2024-07-15 21:08:07.154260] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
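The configuration that bdevio reads from /dev/fd/62 appears above only as the bdev_nvme_attach_controller fragment printed by gen_nvmf_target_json. Wrapped in the usual SPDK application JSON layout, the same attach could be replayed as below; the subsystems/config envelope is an assumption here, and /tmp/nvme1_target.json is a hypothetical path since the harness never writes the file to disk.

cat > /tmp/nvme1_target.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/nvme1_target.json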
00:17:39.981 [2024-07-15 21:08:07.154325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953691 ] 00:17:39.981 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.981 [2024-07-15 21:08:07.228585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.243 [2024-07-15 21:08:07.303831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.243 [2024-07-15 21:08:07.303950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.243 [2024-07-15 21:08:07.303953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.243 I/O targets: 00:17:40.243 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:40.243 00:17:40.243 00:17:40.243 CUnit - A unit testing framework for C - Version 2.1-3 00:17:40.243 http://cunit.sourceforge.net/ 00:17:40.243 00:17:40.243 00:17:40.243 Suite: bdevio tests on: Nvme1n1 00:17:40.243 Test: blockdev write read block ...passed 00:17:40.504 Test: blockdev write zeroes read block ...passed 00:17:40.504 Test: blockdev write zeroes read no split ...passed 00:17:40.504 Test: blockdev write zeroes read split ...passed 00:17:40.504 Test: blockdev write zeroes read split partial ...passed 00:17:40.504 Test: blockdev reset ...[2024-07-15 21:08:07.618525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:40.504 [2024-07-15 21:08:07.618588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5844d0 (9): Bad file descriptor 00:17:40.504 [2024-07-15 21:08:07.639370] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:40.504 passed 00:17:40.504 Test: blockdev write read 8 blocks ...passed 00:17:40.504 Test: blockdev write read size > 128k ...passed 00:17:40.504 Test: blockdev write read invalid size ...passed 00:17:40.504 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.504 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.504 Test: blockdev write read max offset ...passed 00:17:40.765 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.765 Test: blockdev writev readv 8 blocks ...passed 00:17:40.765 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.765 Test: blockdev writev readv block ...passed 00:17:40.765 Test: blockdev writev readv size > 128k ...passed 00:17:40.765 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.765 Test: blockdev comparev and writev ...[2024-07-15 21:08:07.947614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.765 [2024-07-15 21:08:07.947640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:07.947651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.765 [2024-07-15 21:08:07.947657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:07.948192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.765 [2024-07-15 21:08:07.948201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:07.948211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.765 [2024-07-15 21:08:07.948217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:07.948725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.765 [2024-07-15 21:08:07.948734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:07.948743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.765 [2024-07-15 21:08:07.948748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:07.949242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.765 [2024-07-15 21:08:07.949251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:07.949260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.765 [2024-07-15 21:08:07.949265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:40.765 passed 00:17:40.765 Test: blockdev nvme passthru rw ...passed 00:17:40.765 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:08:08.033090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.765 [2024-07-15 21:08:08.033103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:08.033478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.765 [2024-07-15 21:08:08.033489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:08.033858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.765 [2024-07-15 21:08:08.033867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:40.765 [2024-07-15 21:08:08.034228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.765 [2024-07-15 21:08:08.034240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:40.765 passed 00:17:40.765 Test: blockdev nvme admin passthru ...passed 00:17:41.026 Test: blockdev copy ...passed 00:17:41.026 00:17:41.026 Run Summary: Type Total Ran Passed Failed Inactive 00:17:41.026 suites 1 1 n/a 0 0 00:17:41.026 tests 23 23 23 0 0 00:17:41.026 asserts 152 152 152 0 n/a 00:17:41.026 00:17:41.026 Elapsed time = 1.292 seconds 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.026 rmmod nvme_tcp 00:17:41.026 rmmod nvme_fabrics 00:17:41.026 rmmod nvme_keyring 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1953561 ']' 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1953561 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1953561 ']' 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1953561 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.026 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1953561 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1953561' 00:17:41.287 killing process with pid 1953561 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1953561 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1953561 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.287 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.288 21:08:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.288 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.288 21:08:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.834 21:08:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:43.834 00:17:43.834 real 0m12.709s 00:17:43.834 user 0m12.792s 00:17:43.834 sys 0m6.560s 00:17:43.834 21:08:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.834 21:08:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:43.834 ************************************ 00:17:43.834 END TEST nvmf_bdevio 00:17:43.834 ************************************ 00:17:43.834 21:08:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:43.834 21:08:10 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:43.834 21:08:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.834 21:08:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.834 21:08:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.834 ************************************ 00:17:43.834 START TEST nvmf_auth_target 00:17:43.834 ************************************ 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:43.834 * Looking for test storage... 
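Teardown is symmetric for each target test, as the nvmf_bdevio finish above shows: drop the subsystem, unload the kernel initiator modules, stop the target and flush the test address. Condensed from the traced commands (the pid is this run's nvmfpid; remove_spdk_ns, whose body is not shown in the trace, is what is expected to clean up the cvl_0_0_ns_spdk namespace):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp          # also unloads nvme_fabrics/nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics
kill 1953561 && wait 1953561     # killprocess: stop the nvmf_tgt started earlier and reap it (a child of the harness shell)
ip -4 addr flush cvl_0_1         # leave the initiator-side port unconfigured for the next test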
00:17:43.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:43.834 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:43.835 21:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:43.835 21:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.011 21:08:18 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:52.011 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:52.011 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:52.011 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:17:52.012 Found net devices under 0000:31:00.0: cvl_0_0 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:52.012 Found net devices under 0000:31:00.1: cvl_0_1 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:52.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:17:52.012 00:17:52.012 --- 10.0.0.2 ping statistics --- 00:17:52.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.012 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:17:52.012 00:17:52.012 --- 10.0.0.1 ping statistics --- 00:17:52.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.012 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1958695 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1958695 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1958695 ']' 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.012 21:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1958727 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dbd605bfd9804bad60aedc407b69810a2c2284670c2f8f8b 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.owO 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dbd605bfd9804bad60aedc407b69810a2c2284670c2f8f8b 0 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dbd605bfd9804bad60aedc407b69810a2c2284670c2f8f8b 0 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dbd605bfd9804bad60aedc407b69810a2c2284670c2f8f8b 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.owO 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.owO 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.owO 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed95eb1c1fefe4bb69af5034d2be0c8ba74b200126f9fdd69cf5f16ad35d5806 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ycf 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed95eb1c1fefe4bb69af5034d2be0c8ba74b200126f9fdd69cf5f16ad35d5806 3 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed95eb1c1fefe4bb69af5034d2be0c8ba74b200126f9fdd69cf5f16ad35d5806 3 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed95eb1c1fefe4bb69af5034d2be0c8ba74b200126f9fdd69cf5f16ad35d5806 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ycf 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ycf 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ycf 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=337947b547f823a2b17d0fa2743c10a7 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.M7B 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 337947b547f823a2b17d0fa2743c10a7 1 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 337947b547f823a2b17d0fa2743c10a7 1 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=337947b547f823a2b17d0fa2743c10a7 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.M7B 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.M7B 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.M7B 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d28b626030586cf76fba18ea8fecac21c25a4d06ca2139fa 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6rD 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d28b626030586cf76fba18ea8fecac21c25a4d06ca2139fa 2 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d28b626030586cf76fba18ea8fecac21c25a4d06ca2139fa 2 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d28b626030586cf76fba18ea8fecac21c25a4d06ca2139fa 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:52.612 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.873 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6rD 00:17:52.873 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6rD 00:17:52.873 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.6rD 00:17:52.873 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:52.873 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7bd7e81cc9862ad5f79a0563a5f83b39e41c49eeba9eab19 00:17:52.874 
21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ROH 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7bd7e81cc9862ad5f79a0563a5f83b39e41c49eeba9eab19 2 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7bd7e81cc9862ad5f79a0563a5f83b39e41c49eeba9eab19 2 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7bd7e81cc9862ad5f79a0563a5f83b39e41c49eeba9eab19 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ROH 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ROH 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ROH 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:52.874 21:08:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e4cc14bbc6186adca6a7c19ce34373a9 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bgQ 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e4cc14bbc6186adca6a7c19ce34373a9 1 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e4cc14bbc6186adca6a7c19ce34373a9 1 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e4cc14bbc6186adca6a7c19ce34373a9 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bgQ 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bgQ 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.bgQ 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d6175d34e5e91dd5de4cbe0fb9da97eb4c78e4595f791e40857aba00b8b86154 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.K3c 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d6175d34e5e91dd5de4cbe0fb9da97eb4c78e4595f791e40857aba00b8b86154 3 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d6175d34e5e91dd5de4cbe0fb9da97eb4c78e4595f791e40857aba00b8b86154 3 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d6175d34e5e91dd5de4cbe0fb9da97eb4c78e4595f791e40857aba00b8b86154 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.K3c 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.K3c 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.K3c 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1958695 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1958695 ']' 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.874 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1958727 /var/tmp/host.sock 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1958727 ']' 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:53.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.135 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.owO 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.owO 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.owO 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ycf ]] 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ycf 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ycf 00:17:53.396 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ycf 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.M7B 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.M7B 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.M7B 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.6rD ]] 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6rD 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.657 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.919 21:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.919 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6rD 00:17:53.919 21:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6rD 00:17:53.919 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:53.919 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ROH 00:17:53.919 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.919 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.919 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.919 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ROH 00:17:53.919 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ROH 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.bgQ ]] 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bgQ 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bgQ 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.bgQ 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.K3c 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.K3c 00:17:54.180 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.K3c 00:17:54.441 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:54.441 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:54.441 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.441 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.441 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.441 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.441 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:54.441 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.442 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.442 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.442 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.442 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.442 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.702 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.702 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.702 21:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.702 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.702 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.702 00:17:54.702 21:08:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.702 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.702 21:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.964 { 00:17:54.964 "cntlid": 1, 00:17:54.964 "qid": 0, 00:17:54.964 "state": "enabled", 00:17:54.964 "thread": "nvmf_tgt_poll_group_000", 00:17:54.964 "listen_address": { 00:17:54.964 "trtype": "TCP", 00:17:54.964 "adrfam": "IPv4", 00:17:54.964 "traddr": "10.0.0.2", 00:17:54.964 "trsvcid": "4420" 00:17:54.964 }, 00:17:54.964 "peer_address": { 00:17:54.964 "trtype": "TCP", 00:17:54.964 "adrfam": "IPv4", 00:17:54.964 "traddr": "10.0.0.1", 00:17:54.964 "trsvcid": "47780" 00:17:54.964 }, 00:17:54.964 "auth": { 00:17:54.964 "state": "completed", 00:17:54.964 "digest": "sha256", 00:17:54.964 "dhgroup": "null" 00:17:54.964 } 00:17:54.964 } 00:17:54.964 ]' 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.964 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.226 21:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.184 21:08:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.184 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.451 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.451 { 00:17:56.451 "cntlid": 3, 00:17:56.451 "qid": 0, 00:17:56.451 
"state": "enabled", 00:17:56.451 "thread": "nvmf_tgt_poll_group_000", 00:17:56.451 "listen_address": { 00:17:56.451 "trtype": "TCP", 00:17:56.451 "adrfam": "IPv4", 00:17:56.451 "traddr": "10.0.0.2", 00:17:56.451 "trsvcid": "4420" 00:17:56.451 }, 00:17:56.451 "peer_address": { 00:17:56.451 "trtype": "TCP", 00:17:56.451 "adrfam": "IPv4", 00:17:56.451 "traddr": "10.0.0.1", 00:17:56.451 "trsvcid": "56504" 00:17:56.451 }, 00:17:56.451 "auth": { 00:17:56.451 "state": "completed", 00:17:56.451 "digest": "sha256", 00:17:56.451 "dhgroup": "null" 00:17:56.451 } 00:17:56.451 } 00:17:56.451 ]' 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:56.451 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.711 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.711 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.711 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.711 21:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.653 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:57.654 21:08:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.654 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.654 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.654 21:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.654 21:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.654 21:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.654 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.654 21:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.915 00:17:57.915 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.915 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.915 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.175 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.175 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.175 21:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.175 21:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.175 21:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.175 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.175 { 00:17:58.175 "cntlid": 5, 00:17:58.175 "qid": 0, 00:17:58.176 "state": "enabled", 00:17:58.176 "thread": "nvmf_tgt_poll_group_000", 00:17:58.176 "listen_address": { 00:17:58.176 "trtype": "TCP", 00:17:58.176 "adrfam": "IPv4", 00:17:58.176 "traddr": "10.0.0.2", 00:17:58.176 "trsvcid": "4420" 00:17:58.176 }, 00:17:58.176 "peer_address": { 00:17:58.176 "trtype": "TCP", 00:17:58.176 "adrfam": "IPv4", 00:17:58.176 "traddr": "10.0.0.1", 00:17:58.176 "trsvcid": "56532" 00:17:58.176 }, 00:17:58.176 "auth": { 00:17:58.176 "state": "completed", 00:17:58.176 "digest": "sha256", 00:17:58.176 "dhgroup": "null" 00:17:58.176 } 00:17:58.176 } 00:17:58.176 ]' 00:17:58.176 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.176 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.176 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.176 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:58.176 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:58.176 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.176 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.176 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.436 21:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:17:59.008 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.008 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:59.008 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.008 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.008 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.008 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.008 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.008 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.270 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.531 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.531 { 00:17:59.531 "cntlid": 7, 00:17:59.531 "qid": 0, 00:17:59.531 "state": "enabled", 00:17:59.531 "thread": "nvmf_tgt_poll_group_000", 00:17:59.531 "listen_address": { 00:17:59.531 "trtype": "TCP", 00:17:59.531 "adrfam": "IPv4", 00:17:59.531 "traddr": "10.0.0.2", 00:17:59.531 "trsvcid": "4420" 00:17:59.531 }, 00:17:59.531 "peer_address": { 00:17:59.531 "trtype": "TCP", 00:17:59.531 "adrfam": "IPv4", 00:17:59.531 "traddr": "10.0.0.1", 00:17:59.531 "trsvcid": "56556" 00:17:59.531 }, 00:17:59.531 "auth": { 00:17:59.531 "state": "completed", 00:17:59.531 "digest": "sha256", 00:17:59.531 "dhgroup": "null" 00:17:59.531 } 00:17:59.531 } 00:17:59.531 ]' 00:17:59.531 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.791 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.791 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.791 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.792 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.792 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.792 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.792 21:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.792 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.732 21:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.992 00:18:00.992 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.992 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.992 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.252 { 00:18:01.252 "cntlid": 9, 00:18:01.252 "qid": 0, 00:18:01.252 "state": "enabled", 00:18:01.252 "thread": "nvmf_tgt_poll_group_000", 00:18:01.252 "listen_address": { 00:18:01.252 "trtype": "TCP", 00:18:01.252 "adrfam": "IPv4", 00:18:01.252 "traddr": "10.0.0.2", 00:18:01.252 "trsvcid": "4420" 00:18:01.252 }, 00:18:01.252 "peer_address": { 00:18:01.252 "trtype": "TCP", 00:18:01.252 "adrfam": "IPv4", 00:18:01.252 "traddr": "10.0.0.1", 00:18:01.252 "trsvcid": "56590" 00:18:01.252 }, 00:18:01.252 "auth": { 00:18:01.252 "state": "completed", 00:18:01.252 "digest": "sha256", 00:18:01.252 "dhgroup": "ffdhe2048" 00:18:01.252 } 00:18:01.252 } 00:18:01.252 ]' 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.252 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.513 21:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:02.084 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.084 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.084 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.084 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.345 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.606 00:18:02.606 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.606 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.606 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.866 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.866 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.866 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.866 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.866 21:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.866 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.866 { 00:18:02.866 "cntlid": 11, 00:18:02.866 "qid": 0, 00:18:02.866 "state": "enabled", 00:18:02.866 "thread": "nvmf_tgt_poll_group_000", 00:18:02.866 "listen_address": { 00:18:02.866 "trtype": "TCP", 00:18:02.866 "adrfam": "IPv4", 00:18:02.866 "traddr": "10.0.0.2", 00:18:02.866 "trsvcid": "4420" 00:18:02.866 }, 00:18:02.866 "peer_address": { 00:18:02.866 "trtype": "TCP", 00:18:02.866 "adrfam": "IPv4", 00:18:02.866 "traddr": "10.0.0.1", 00:18:02.866 "trsvcid": "56608" 00:18:02.866 }, 00:18:02.866 "auth": { 00:18:02.866 "state": "completed", 00:18:02.866 "digest": "sha256", 00:18:02.866 "dhgroup": "ffdhe2048" 00:18:02.866 } 00:18:02.866 } 00:18:02.866 ]' 00:18:02.866 
21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.866 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.866 21:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.866 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.866 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.866 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.866 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.866 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.126 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:03.697 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.697 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.697 21:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.697 21:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.697 21:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.697 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.697 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.697 21:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.957 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.219 00:18:04.219 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.219 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.219 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.219 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.479 { 00:18:04.479 "cntlid": 13, 00:18:04.479 "qid": 0, 00:18:04.479 "state": "enabled", 00:18:04.479 "thread": "nvmf_tgt_poll_group_000", 00:18:04.479 "listen_address": { 00:18:04.479 "trtype": "TCP", 00:18:04.479 "adrfam": "IPv4", 00:18:04.479 "traddr": "10.0.0.2", 00:18:04.479 "trsvcid": "4420" 00:18:04.479 }, 00:18:04.479 "peer_address": { 00:18:04.479 "trtype": "TCP", 00:18:04.479 "adrfam": "IPv4", 00:18:04.479 "traddr": "10.0.0.1", 00:18:04.479 "trsvcid": "56620" 00:18:04.479 }, 00:18:04.479 "auth": { 00:18:04.479 "state": "completed", 00:18:04.479 "digest": "sha256", 00:18:04.479 "dhgroup": "ffdhe2048" 00:18:04.479 } 00:18:04.479 } 00:18:04.479 ]' 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.479 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.739 21:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:05.308 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.308 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.308 21:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.308 21:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.308 21:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.308 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.308 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.308 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.568 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:05.568 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.568 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.568 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.568 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.568 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.569 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:05.569 21:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.569 21:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.569 21:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.569 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.569 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.829 00:18:05.829 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.829 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.829 21:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.829 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.829 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.829 21:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.829 21:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.829 21:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.829 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.829 { 00:18:05.829 "cntlid": 15, 00:18:05.829 "qid": 0, 00:18:05.829 "state": "enabled", 00:18:05.829 "thread": "nvmf_tgt_poll_group_000", 00:18:05.829 "listen_address": { 00:18:05.829 "trtype": "TCP", 00:18:05.829 "adrfam": "IPv4", 00:18:05.829 "traddr": "10.0.0.2", 00:18:05.829 "trsvcid": "4420" 00:18:05.829 }, 00:18:05.829 "peer_address": { 00:18:05.829 "trtype": "TCP", 00:18:05.829 "adrfam": "IPv4", 00:18:05.829 "traddr": "10.0.0.1", 00:18:05.829 "trsvcid": "56650" 00:18:05.829 }, 00:18:05.829 "auth": { 00:18:05.829 "state": "completed", 00:18:05.829 "digest": "sha256", 00:18:05.829 "dhgroup": "ffdhe2048" 00:18:05.829 } 00:18:05.829 } 00:18:05.829 ]' 00:18:05.829 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.089 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.089 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.089 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.089 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.089 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.089 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.089 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.349 21:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.919 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.181 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.440 00:18:07.440 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.440 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.440 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.440 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.440 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.440 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.440 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.441 21:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.441 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.441 { 00:18:07.441 "cntlid": 17, 00:18:07.441 "qid": 0, 00:18:07.441 "state": "enabled", 00:18:07.441 "thread": "nvmf_tgt_poll_group_000", 00:18:07.441 "listen_address": { 00:18:07.441 "trtype": "TCP", 00:18:07.441 "adrfam": "IPv4", 00:18:07.441 "traddr": 
"10.0.0.2", 00:18:07.441 "trsvcid": "4420" 00:18:07.441 }, 00:18:07.441 "peer_address": { 00:18:07.441 "trtype": "TCP", 00:18:07.441 "adrfam": "IPv4", 00:18:07.441 "traddr": "10.0.0.1", 00:18:07.441 "trsvcid": "52242" 00:18:07.441 }, 00:18:07.441 "auth": { 00:18:07.441 "state": "completed", 00:18:07.441 "digest": "sha256", 00:18:07.441 "dhgroup": "ffdhe3072" 00:18:07.441 } 00:18:07.441 } 00:18:07.441 ]' 00:18:07.441 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.441 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.441 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.702 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.702 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.702 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.702 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.702 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.702 21:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.646 21:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.907 00:18:08.908 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.908 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.908 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.169 { 00:18:09.169 "cntlid": 19, 00:18:09.169 "qid": 0, 00:18:09.169 "state": "enabled", 00:18:09.169 "thread": "nvmf_tgt_poll_group_000", 00:18:09.169 "listen_address": { 00:18:09.169 "trtype": "TCP", 00:18:09.169 "adrfam": "IPv4", 00:18:09.169 "traddr": "10.0.0.2", 00:18:09.169 "trsvcid": "4420" 00:18:09.169 }, 00:18:09.169 "peer_address": { 00:18:09.169 "trtype": "TCP", 00:18:09.169 "adrfam": "IPv4", 00:18:09.169 "traddr": "10.0.0.1", 00:18:09.169 "trsvcid": "52270" 00:18:09.169 }, 00:18:09.169 "auth": { 00:18:09.169 "state": "completed", 00:18:09.169 "digest": "sha256", 00:18:09.169 "dhgroup": "ffdhe3072" 00:18:09.169 } 00:18:09.169 } 00:18:09.169 ]' 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.169 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.170 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.170 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.170 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.430 21:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:10.003 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.003 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.003 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.003 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.003 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.003 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.003 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.003 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.264 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.525 00:18:10.525 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.525 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.525 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.786 { 00:18:10.786 "cntlid": 21, 00:18:10.786 "qid": 0, 00:18:10.786 "state": "enabled", 00:18:10.786 "thread": "nvmf_tgt_poll_group_000", 00:18:10.786 "listen_address": { 00:18:10.786 "trtype": "TCP", 00:18:10.786 "adrfam": "IPv4", 00:18:10.786 "traddr": "10.0.0.2", 00:18:10.786 "trsvcid": "4420" 00:18:10.786 }, 00:18:10.786 "peer_address": { 00:18:10.786 "trtype": "TCP", 00:18:10.786 "adrfam": "IPv4", 00:18:10.786 "traddr": "10.0.0.1", 00:18:10.786 "trsvcid": "52296" 00:18:10.786 }, 00:18:10.786 "auth": { 00:18:10.786 "state": "completed", 00:18:10.786 "digest": "sha256", 00:18:10.786 "dhgroup": "ffdhe3072" 00:18:10.786 } 00:18:10.786 } 00:18:10.786 ]' 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.786 21:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.047 21:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:11.779 21:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
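Condensed, the cycle this log repeats for every digest/dhgroup/key combination looks roughly as follows. The socket paths, NQNs, host UUID, RPC names and nvme-cli flags are taken verbatim from the log above; the key objects key0-key3 / ckey0-ckey3 are presumably registered earlier in target/auth.sh (not shown in this excerpt), and the long DHHC-1 secrets are abbreviated here.

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # host side: restrict the initiator to the digest/dhgroup under test
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # target side: allow the host on the subsystem with the key pair under test
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attach from the host, confirm the qpair negotiated the expected auth parameters, then detach
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # expect "completed"
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake with nvme-cli using the matching DHHC-1 secrets, then clean up
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
  nvme disconnect -n $SUBNQN
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN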
00:18:11.779 21:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.779 21:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.779 21:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.779 21:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.779 21:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.779 21:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.779 21:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.779 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.041 00:18:12.041 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.041 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.041 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.302 { 00:18:12.302 "cntlid": 23, 00:18:12.302 "qid": 0, 00:18:12.302 "state": "enabled", 00:18:12.302 "thread": "nvmf_tgt_poll_group_000", 00:18:12.302 "listen_address": { 00:18:12.302 "trtype": "TCP", 00:18:12.302 "adrfam": "IPv4", 00:18:12.302 "traddr": "10.0.0.2", 00:18:12.302 "trsvcid": "4420" 00:18:12.302 }, 00:18:12.302 "peer_address": { 00:18:12.302 "trtype": "TCP", 00:18:12.302 "adrfam": "IPv4", 00:18:12.302 "traddr": "10.0.0.1", 00:18:12.302 "trsvcid": "52326" 00:18:12.302 }, 00:18:12.302 "auth": { 00:18:12.302 "state": "completed", 00:18:12.302 "digest": "sha256", 00:18:12.302 "dhgroup": "ffdhe3072" 00:18:12.302 } 00:18:12.302 } 00:18:12.302 ]' 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.302 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.563 21:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.507 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.768 00:18:13.768 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.768 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.768 21:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.029 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.029 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.029 21:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.030 { 00:18:14.030 "cntlid": 25, 00:18:14.030 "qid": 0, 00:18:14.030 "state": "enabled", 00:18:14.030 "thread": "nvmf_tgt_poll_group_000", 00:18:14.030 "listen_address": { 00:18:14.030 "trtype": "TCP", 00:18:14.030 "adrfam": "IPv4", 00:18:14.030 "traddr": "10.0.0.2", 00:18:14.030 "trsvcid": "4420" 00:18:14.030 }, 00:18:14.030 "peer_address": { 00:18:14.030 "trtype": "TCP", 00:18:14.030 "adrfam": "IPv4", 00:18:14.030 "traddr": "10.0.0.1", 00:18:14.030 "trsvcid": "52350" 00:18:14.030 }, 00:18:14.030 "auth": { 00:18:14.030 "state": "completed", 00:18:14.030 "digest": "sha256", 00:18:14.030 "dhgroup": "ffdhe4096" 00:18:14.030 } 00:18:14.030 } 00:18:14.030 ]' 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.030 21:08:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.030 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.291 21:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:14.862 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.862 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.862 21:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.862 21:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.863 21:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.863 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.863 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.863 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.123 21:08:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.123 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.385 00:18:15.385 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.385 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.385 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.385 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.385 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.385 21:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.385 21:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.645 { 00:18:15.645 "cntlid": 27, 00:18:15.645 "qid": 0, 00:18:15.645 "state": "enabled", 00:18:15.645 "thread": "nvmf_tgt_poll_group_000", 00:18:15.645 "listen_address": { 00:18:15.645 "trtype": "TCP", 00:18:15.645 "adrfam": "IPv4", 00:18:15.645 "traddr": "10.0.0.2", 00:18:15.645 "trsvcid": "4420" 00:18:15.645 }, 00:18:15.645 "peer_address": { 00:18:15.645 "trtype": "TCP", 00:18:15.645 "adrfam": "IPv4", 00:18:15.645 "traddr": "10.0.0.1", 00:18:15.645 "trsvcid": "52380" 00:18:15.645 }, 00:18:15.645 "auth": { 00:18:15.645 "state": "completed", 00:18:15.645 "digest": "sha256", 00:18:15.645 "dhgroup": "ffdhe4096" 00:18:15.645 } 00:18:15.645 } 00:18:15.645 ]' 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.645 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.906 21:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:16.475 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.475 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.475 21:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.475 21:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.475 21:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.475 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.475 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.475 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.736 21:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.997 00:18:16.997 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.997 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.997 21:08:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.997 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.998 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.998 21:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.998 21:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.998 21:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.998 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.998 { 00:18:16.998 "cntlid": 29, 00:18:16.998 "qid": 0, 00:18:16.998 "state": "enabled", 00:18:16.998 "thread": "nvmf_tgt_poll_group_000", 00:18:16.998 "listen_address": { 00:18:16.998 "trtype": "TCP", 00:18:16.998 "adrfam": "IPv4", 00:18:16.998 "traddr": "10.0.0.2", 00:18:16.998 "trsvcid": "4420" 00:18:16.998 }, 00:18:16.998 "peer_address": { 00:18:16.998 "trtype": "TCP", 00:18:16.998 "adrfam": "IPv4", 00:18:16.998 "traddr": "10.0.0.1", 00:18:16.998 "trsvcid": "57350" 00:18:16.998 }, 00:18:16.998 "auth": { 00:18:16.998 "state": "completed", 00:18:16.998 "digest": "sha256", 00:18:16.998 "dhgroup": "ffdhe4096" 00:18:16.998 } 00:18:16.998 } 00:18:16.998 ]' 00:18:16.998 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.259 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.259 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.259 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.259 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.259 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.259 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.259 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.519 21:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:18.091 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.091 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.091 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.091 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.091 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.091 21:08:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.091 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:18.091 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.352 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.613 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.613 { 00:18:18.613 "cntlid": 31, 00:18:18.613 "qid": 0, 00:18:18.613 "state": "enabled", 00:18:18.613 "thread": "nvmf_tgt_poll_group_000", 00:18:18.613 "listen_address": { 00:18:18.613 "trtype": "TCP", 00:18:18.613 "adrfam": "IPv4", 00:18:18.613 "traddr": "10.0.0.2", 00:18:18.613 "trsvcid": "4420" 00:18:18.613 }, 
00:18:18.613 "peer_address": { 00:18:18.613 "trtype": "TCP", 00:18:18.613 "adrfam": "IPv4", 00:18:18.613 "traddr": "10.0.0.1", 00:18:18.613 "trsvcid": "57364" 00:18:18.613 }, 00:18:18.613 "auth": { 00:18:18.613 "state": "completed", 00:18:18.613 "digest": "sha256", 00:18:18.613 "dhgroup": "ffdhe4096" 00:18:18.613 } 00:18:18.613 } 00:18:18.613 ]' 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.613 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.874 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.874 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.874 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.874 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.874 21:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.875 21:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.818 21:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.818 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.079 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.340 { 00:18:20.340 "cntlid": 33, 00:18:20.340 "qid": 0, 00:18:20.340 "state": "enabled", 00:18:20.340 "thread": "nvmf_tgt_poll_group_000", 00:18:20.340 "listen_address": { 00:18:20.340 "trtype": "TCP", 00:18:20.340 "adrfam": "IPv4", 00:18:20.340 "traddr": "10.0.0.2", 00:18:20.340 "trsvcid": "4420" 00:18:20.340 }, 00:18:20.340 "peer_address": { 00:18:20.340 "trtype": "TCP", 00:18:20.340 "adrfam": "IPv4", 00:18:20.340 "traddr": "10.0.0.1", 00:18:20.340 "trsvcid": "57406" 00:18:20.340 }, 00:18:20.340 "auth": { 00:18:20.340 "state": "completed", 00:18:20.340 "digest": "sha256", 00:18:20.340 "dhgroup": "ffdhe6144" 00:18:20.340 } 00:18:20.340 } 00:18:20.340 ]' 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.340 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.601 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.601 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.601 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.601 21:08:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.601 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.601 21:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.542 21:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.803 00:18:21.803 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.803 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.803 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.064 { 00:18:22.064 "cntlid": 35, 00:18:22.064 "qid": 0, 00:18:22.064 "state": "enabled", 00:18:22.064 "thread": "nvmf_tgt_poll_group_000", 00:18:22.064 "listen_address": { 00:18:22.064 "trtype": "TCP", 00:18:22.064 "adrfam": "IPv4", 00:18:22.064 "traddr": "10.0.0.2", 00:18:22.064 "trsvcid": "4420" 00:18:22.064 }, 00:18:22.064 "peer_address": { 00:18:22.064 "trtype": "TCP", 00:18:22.064 "adrfam": "IPv4", 00:18:22.064 "traddr": "10.0.0.1", 00:18:22.064 "trsvcid": "57438" 00:18:22.064 }, 00:18:22.064 "auth": { 00:18:22.064 "state": "completed", 00:18:22.064 "digest": "sha256", 00:18:22.064 "dhgroup": "ffdhe6144" 00:18:22.064 } 00:18:22.064 } 00:18:22.064 ]' 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.064 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.324 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.324 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.324 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.324 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.324 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.324 21:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.265 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.526 00:18:23.526 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.526 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.526 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.787 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.787 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.787 21:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.787 21:08:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.787 21:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.787 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.787 { 00:18:23.787 "cntlid": 37, 00:18:23.787 "qid": 0, 00:18:23.787 "state": "enabled", 00:18:23.787 "thread": "nvmf_tgt_poll_group_000", 00:18:23.787 "listen_address": { 00:18:23.787 "trtype": "TCP", 00:18:23.787 "adrfam": "IPv4", 00:18:23.787 "traddr": "10.0.0.2", 00:18:23.787 "trsvcid": "4420" 00:18:23.787 }, 00:18:23.787 "peer_address": { 00:18:23.787 "trtype": "TCP", 00:18:23.787 "adrfam": "IPv4", 00:18:23.787 "traddr": "10.0.0.1", 00:18:23.787 "trsvcid": "57464" 00:18:23.787 }, 00:18:23.787 "auth": { 00:18:23.787 "state": "completed", 00:18:23.787 "digest": "sha256", 00:18:23.787 "dhgroup": "ffdhe6144" 00:18:23.787 } 00:18:23.787 } 00:18:23.787 ]' 00:18:23.787 21:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.787 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.787 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.787 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.787 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.047 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.047 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.047 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.048 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:24.990 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.990 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.990 21:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.990 21:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.990 21:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.990 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.990 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:24.990 21:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.990 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.251 00:18:25.251 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.251 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.251 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.513 { 00:18:25.513 "cntlid": 39, 00:18:25.513 "qid": 0, 00:18:25.513 "state": "enabled", 00:18:25.513 "thread": "nvmf_tgt_poll_group_000", 00:18:25.513 "listen_address": { 00:18:25.513 "trtype": "TCP", 00:18:25.513 "adrfam": "IPv4", 00:18:25.513 "traddr": "10.0.0.2", 00:18:25.513 "trsvcid": "4420" 00:18:25.513 }, 00:18:25.513 "peer_address": { 00:18:25.513 "trtype": "TCP", 00:18:25.513 "adrfam": "IPv4", 00:18:25.513 "traddr": "10.0.0.1", 00:18:25.513 "trsvcid": "57496" 00:18:25.513 }, 00:18:25.513 "auth": { 00:18:25.513 "state": "completed", 00:18:25.513 "digest": "sha256", 00:18:25.513 "dhgroup": "ffdhe6144" 00:18:25.513 } 00:18:25.513 } 00:18:25.513 ]' 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.513 21:08:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.513 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.774 21:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.344 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.604 21:08:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.604 21:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.174 00:18:27.174 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.174 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.174 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.435 { 00:18:27.435 "cntlid": 41, 00:18:27.435 "qid": 0, 00:18:27.435 "state": "enabled", 00:18:27.435 "thread": "nvmf_tgt_poll_group_000", 00:18:27.435 "listen_address": { 00:18:27.435 "trtype": "TCP", 00:18:27.435 "adrfam": "IPv4", 00:18:27.435 "traddr": "10.0.0.2", 00:18:27.435 "trsvcid": "4420" 00:18:27.435 }, 00:18:27.435 "peer_address": { 00:18:27.435 "trtype": "TCP", 00:18:27.435 "adrfam": "IPv4", 00:18:27.435 "traddr": "10.0.0.1", 00:18:27.435 "trsvcid": "39412" 00:18:27.435 }, 00:18:27.435 "auth": { 00:18:27.435 "state": "completed", 00:18:27.435 "digest": "sha256", 00:18:27.435 "dhgroup": "ffdhe8192" 00:18:27.435 } 00:18:27.435 } 00:18:27.435 ]' 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.435 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.695 21:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:28.264 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.264 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.264 21:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.264 21:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.264 21:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.264 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.264 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.264 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.524 21:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.094 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.094 { 00:18:29.094 "cntlid": 43, 00:18:29.094 "qid": 0, 00:18:29.094 "state": "enabled", 00:18:29.094 "thread": "nvmf_tgt_poll_group_000", 00:18:29.094 "listen_address": { 00:18:29.094 "trtype": "TCP", 00:18:29.094 "adrfam": "IPv4", 00:18:29.094 "traddr": "10.0.0.2", 00:18:29.094 "trsvcid": "4420" 00:18:29.094 }, 00:18:29.094 "peer_address": { 00:18:29.094 "trtype": "TCP", 00:18:29.094 "adrfam": "IPv4", 00:18:29.094 "traddr": "10.0.0.1", 00:18:29.094 "trsvcid": "39442" 00:18:29.094 }, 00:18:29.094 "auth": { 00:18:29.094 "state": "completed", 00:18:29.094 "digest": "sha256", 00:18:29.094 "dhgroup": "ffdhe8192" 00:18:29.094 } 00:18:29.094 } 00:18:29.094 ]' 00:18:29.094 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.354 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.354 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.354 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.354 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.354 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.354 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.354 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.614 21:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:30.183 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.183 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:30.183 21:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.183 21:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.183 21:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.183 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:30.183 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.183 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.443 21:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.012 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.013 { 00:18:31.013 "cntlid": 45, 00:18:31.013 "qid": 0, 00:18:31.013 "state": "enabled", 00:18:31.013 "thread": "nvmf_tgt_poll_group_000", 00:18:31.013 "listen_address": { 00:18:31.013 "trtype": "TCP", 00:18:31.013 "adrfam": "IPv4", 00:18:31.013 "traddr": "10.0.0.2", 00:18:31.013 "trsvcid": "4420" 
00:18:31.013 }, 00:18:31.013 "peer_address": { 00:18:31.013 "trtype": "TCP", 00:18:31.013 "adrfam": "IPv4", 00:18:31.013 "traddr": "10.0.0.1", 00:18:31.013 "trsvcid": "39460" 00:18:31.013 }, 00:18:31.013 "auth": { 00:18:31.013 "state": "completed", 00:18:31.013 "digest": "sha256", 00:18:31.013 "dhgroup": "ffdhe8192" 00:18:31.013 } 00:18:31.013 } 00:18:31.013 ]' 00:18:31.013 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.273 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.273 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.273 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.273 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.273 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.273 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.273 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.273 21:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.233 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.234 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:32.234 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.234 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.234 21:08:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:32.234 21:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.234 21:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.234 21:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.234 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.234 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.804 00:18:32.804 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.804 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.804 21:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.074 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.074 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.074 21:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.074 21:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.074 21:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.074 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.074 { 00:18:33.074 "cntlid": 47, 00:18:33.074 "qid": 0, 00:18:33.074 "state": "enabled", 00:18:33.074 "thread": "nvmf_tgt_poll_group_000", 00:18:33.074 "listen_address": { 00:18:33.074 "trtype": "TCP", 00:18:33.074 "adrfam": "IPv4", 00:18:33.074 "traddr": "10.0.0.2", 00:18:33.074 "trsvcid": "4420" 00:18:33.074 }, 00:18:33.074 "peer_address": { 00:18:33.074 "trtype": "TCP", 00:18:33.074 "adrfam": "IPv4", 00:18:33.074 "traddr": "10.0.0.1", 00:18:33.074 "trsvcid": "39482" 00:18:33.074 }, 00:18:33.074 "auth": { 00:18:33.074 "state": "completed", 00:18:33.074 "digest": "sha256", 00:18:33.074 "dhgroup": "ffdhe8192" 00:18:33.074 } 00:18:33.074 } 00:18:33.074 ]' 00:18:33.074 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.075 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.075 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.075 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.075 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.075 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.075 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.075 
21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.335 21:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.905 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.165 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.424 00:18:34.424 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.424 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.425 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.683 { 00:18:34.683 "cntlid": 49, 00:18:34.683 "qid": 0, 00:18:34.683 "state": "enabled", 00:18:34.683 "thread": "nvmf_tgt_poll_group_000", 00:18:34.683 "listen_address": { 00:18:34.683 "trtype": "TCP", 00:18:34.683 "adrfam": "IPv4", 00:18:34.683 "traddr": "10.0.0.2", 00:18:34.683 "trsvcid": "4420" 00:18:34.683 }, 00:18:34.683 "peer_address": { 00:18:34.683 "trtype": "TCP", 00:18:34.683 "adrfam": "IPv4", 00:18:34.683 "traddr": "10.0.0.1", 00:18:34.683 "trsvcid": "39510" 00:18:34.683 }, 00:18:34.683 "auth": { 00:18:34.683 "state": "completed", 00:18:34.683 "digest": "sha384", 00:18:34.683 "dhgroup": "null" 00:18:34.683 } 00:18:34.683 } 00:18:34.683 ]' 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.683 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:34.684 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.684 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.684 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.684 21:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.942 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:35.509 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.509 21:09:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:35.509 21:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.509 21:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.509 21:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.509 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.509 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:35.509 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.768 21:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.769 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.769 21:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.028 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.028 { 00:18:36.028 "cntlid": 51, 00:18:36.028 "qid": 0, 00:18:36.028 "state": "enabled", 00:18:36.028 "thread": "nvmf_tgt_poll_group_000", 00:18:36.028 "listen_address": { 00:18:36.028 "trtype": "TCP", 00:18:36.028 "adrfam": "IPv4", 00:18:36.028 "traddr": "10.0.0.2", 00:18:36.028 "trsvcid": "4420" 00:18:36.028 }, 00:18:36.028 "peer_address": { 00:18:36.028 "trtype": "TCP", 00:18:36.028 "adrfam": "IPv4", 00:18:36.028 "traddr": "10.0.0.1", 00:18:36.028 "trsvcid": "50290" 00:18:36.028 }, 00:18:36.028 "auth": { 00:18:36.028 "state": "completed", 00:18:36.028 "digest": "sha384", 00:18:36.028 "dhgroup": "null" 00:18:36.028 } 00:18:36.028 } 00:18:36.028 ]' 00:18:36.028 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.286 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.286 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.286 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:36.286 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.286 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.286 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.286 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.545 21:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:37.114 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.114 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.114 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.114 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.114 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.114 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.114 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:37.114 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:37.374 21:09:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.374 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.633 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.633 { 00:18:37.633 "cntlid": 53, 00:18:37.633 "qid": 0, 00:18:37.633 "state": "enabled", 00:18:37.633 "thread": "nvmf_tgt_poll_group_000", 00:18:37.633 "listen_address": { 00:18:37.633 "trtype": "TCP", 00:18:37.633 "adrfam": "IPv4", 00:18:37.633 "traddr": "10.0.0.2", 00:18:37.633 "trsvcid": "4420" 00:18:37.633 }, 00:18:37.633 "peer_address": { 00:18:37.633 "trtype": "TCP", 00:18:37.633 "adrfam": "IPv4", 00:18:37.633 "traddr": "10.0.0.1", 00:18:37.633 "trsvcid": "50316" 00:18:37.633 }, 00:18:37.633 "auth": { 00:18:37.633 "state": "completed", 00:18:37.633 "digest": "sha384", 00:18:37.633 "dhgroup": "null" 00:18:37.633 } 00:18:37.633 } 00:18:37.633 ]' 00:18:37.633 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.894 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:37.894 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.894 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:37.894 21:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.894 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.894 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.894 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.894 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:38.833 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.833 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.833 21:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.833 21:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.833 21:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.833 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.833 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:38.833 21:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.833 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.092 00:18:39.093 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.093 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.093 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.353 { 00:18:39.353 "cntlid": 55, 00:18:39.353 "qid": 0, 00:18:39.353 "state": "enabled", 00:18:39.353 "thread": "nvmf_tgt_poll_group_000", 00:18:39.353 "listen_address": { 00:18:39.353 "trtype": "TCP", 00:18:39.353 "adrfam": "IPv4", 00:18:39.353 "traddr": "10.0.0.2", 00:18:39.353 "trsvcid": "4420" 00:18:39.353 }, 00:18:39.353 "peer_address": { 00:18:39.353 "trtype": "TCP", 00:18:39.353 "adrfam": "IPv4", 00:18:39.353 "traddr": "10.0.0.1", 00:18:39.353 "trsvcid": "50348" 00:18:39.353 }, 00:18:39.353 "auth": { 00:18:39.353 "state": "completed", 00:18:39.353 "digest": "sha384", 00:18:39.353 "dhgroup": "null" 00:18:39.353 } 00:18:39.353 } 00:18:39.353 ]' 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.353 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.637 21:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:40.208 21:09:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.208 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.208 21:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.208 21:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.208 21:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.208 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.208 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.208 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.208 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.468 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.728 00:18:40.728 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.728 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.728 21:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.728 21:09:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.728 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.728 21:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.728 21:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.989 { 00:18:40.989 "cntlid": 57, 00:18:40.989 "qid": 0, 00:18:40.989 "state": "enabled", 00:18:40.989 "thread": "nvmf_tgt_poll_group_000", 00:18:40.989 "listen_address": { 00:18:40.989 "trtype": "TCP", 00:18:40.989 "adrfam": "IPv4", 00:18:40.989 "traddr": "10.0.0.2", 00:18:40.989 "trsvcid": "4420" 00:18:40.989 }, 00:18:40.989 "peer_address": { 00:18:40.989 "trtype": "TCP", 00:18:40.989 "adrfam": "IPv4", 00:18:40.989 "traddr": "10.0.0.1", 00:18:40.989 "trsvcid": "50376" 00:18:40.989 }, 00:18:40.989 "auth": { 00:18:40.989 "state": "completed", 00:18:40.989 "digest": "sha384", 00:18:40.989 "dhgroup": "ffdhe2048" 00:18:40.989 } 00:18:40.989 } 00:18:40.989 ]' 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.989 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.249 21:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:41.819 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.819 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:41.819 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.819 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.819 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.819 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.819 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:41.819 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.079 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.339 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.339 { 00:18:42.339 "cntlid": 59, 00:18:42.339 "qid": 0, 00:18:42.339 "state": "enabled", 00:18:42.339 "thread": "nvmf_tgt_poll_group_000", 00:18:42.339 "listen_address": { 00:18:42.339 "trtype": "TCP", 00:18:42.339 "adrfam": "IPv4", 00:18:42.339 "traddr": "10.0.0.2", 00:18:42.339 "trsvcid": "4420" 00:18:42.339 }, 00:18:42.339 "peer_address": { 00:18:42.339 "trtype": "TCP", 00:18:42.339 "adrfam": "IPv4", 00:18:42.339 
"traddr": "10.0.0.1", 00:18:42.339 "trsvcid": "50390" 00:18:42.339 }, 00:18:42.339 "auth": { 00:18:42.339 "state": "completed", 00:18:42.339 "digest": "sha384", 00:18:42.339 "dhgroup": "ffdhe2048" 00:18:42.339 } 00:18:42.339 } 00:18:42.339 ]' 00:18:42.339 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.599 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.600 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.600 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.600 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.600 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.600 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.600 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.600 21:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.540 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.801 00:18:43.801 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.801 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.801 21:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.068 { 00:18:44.068 "cntlid": 61, 00:18:44.068 "qid": 0, 00:18:44.068 "state": "enabled", 00:18:44.068 "thread": "nvmf_tgt_poll_group_000", 00:18:44.068 "listen_address": { 00:18:44.068 "trtype": "TCP", 00:18:44.068 "adrfam": "IPv4", 00:18:44.068 "traddr": "10.0.0.2", 00:18:44.068 "trsvcid": "4420" 00:18:44.068 }, 00:18:44.068 "peer_address": { 00:18:44.068 "trtype": "TCP", 00:18:44.068 "adrfam": "IPv4", 00:18:44.068 "traddr": "10.0.0.1", 00:18:44.068 "trsvcid": "50410" 00:18:44.068 }, 00:18:44.068 "auth": { 00:18:44.068 "state": "completed", 00:18:44.068 "digest": "sha384", 00:18:44.068 "dhgroup": "ffdhe2048" 00:18:44.068 } 00:18:44.068 } 00:18:44.068 ]' 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.068 21:09:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.328 21:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:44.899 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.899 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.899 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.899 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.899 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.899 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.899 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:44.899 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.158 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.417 00:18:45.417 21:09:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.417 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.417 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.676 { 00:18:45.676 "cntlid": 63, 00:18:45.676 "qid": 0, 00:18:45.676 "state": "enabled", 00:18:45.676 "thread": "nvmf_tgt_poll_group_000", 00:18:45.676 "listen_address": { 00:18:45.676 "trtype": "TCP", 00:18:45.676 "adrfam": "IPv4", 00:18:45.676 "traddr": "10.0.0.2", 00:18:45.676 "trsvcid": "4420" 00:18:45.676 }, 00:18:45.676 "peer_address": { 00:18:45.676 "trtype": "TCP", 00:18:45.676 "adrfam": "IPv4", 00:18:45.676 "traddr": "10.0.0.1", 00:18:45.676 "trsvcid": "50432" 00:18:45.676 }, 00:18:45.676 "auth": { 00:18:45.676 "state": "completed", 00:18:45.676 "digest": "sha384", 00:18:45.676 "dhgroup": "ffdhe2048" 00:18:45.676 } 00:18:45.676 } 00:18:45.676 ]' 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.676 21:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.954 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.523 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.784 21:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.045 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.045 { 
00:18:47.045 "cntlid": 65, 00:18:47.045 "qid": 0, 00:18:47.045 "state": "enabled", 00:18:47.045 "thread": "nvmf_tgt_poll_group_000", 00:18:47.045 "listen_address": { 00:18:47.045 "trtype": "TCP", 00:18:47.045 "adrfam": "IPv4", 00:18:47.045 "traddr": "10.0.0.2", 00:18:47.045 "trsvcid": "4420" 00:18:47.045 }, 00:18:47.045 "peer_address": { 00:18:47.045 "trtype": "TCP", 00:18:47.045 "adrfam": "IPv4", 00:18:47.045 "traddr": "10.0.0.1", 00:18:47.045 "trsvcid": "57768" 00:18:47.045 }, 00:18:47.045 "auth": { 00:18:47.045 "state": "completed", 00:18:47.045 "digest": "sha384", 00:18:47.045 "dhgroup": "ffdhe3072" 00:18:47.045 } 00:18:47.045 } 00:18:47.045 ]' 00:18:47.045 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.304 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.304 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.304 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.304 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.304 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.304 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.304 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.564 21:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:48.133 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.133 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.133 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.133 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.133 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.133 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.133 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:48.133 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:48.397 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:48.397 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.397 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:48.397 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:48.398 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.398 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.398 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.398 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.398 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.398 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.398 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.398 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.702 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.702 { 00:18:48.702 "cntlid": 67, 00:18:48.702 "qid": 0, 00:18:48.702 "state": "enabled", 00:18:48.702 "thread": "nvmf_tgt_poll_group_000", 00:18:48.702 "listen_address": { 00:18:48.702 "trtype": "TCP", 00:18:48.702 "adrfam": "IPv4", 00:18:48.702 "traddr": "10.0.0.2", 00:18:48.702 "trsvcid": "4420" 00:18:48.702 }, 00:18:48.702 "peer_address": { 00:18:48.702 "trtype": "TCP", 00:18:48.702 "adrfam": "IPv4", 00:18:48.702 "traddr": "10.0.0.1", 00:18:48.702 "trsvcid": "57808" 00:18:48.702 }, 00:18:48.702 "auth": { 00:18:48.702 "state": "completed", 00:18:48.702 "digest": "sha384", 00:18:48.702 "dhgroup": "ffdhe3072" 00:18:48.702 } 00:18:48.702 } 00:18:48.702 ]' 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.702 21:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.009 21:09:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.010 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.010 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.010 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.010 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.010 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:49.957 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.957 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.957 21:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.957 21:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.958 21:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.958 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.958 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:49.958 21:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.958 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.218 00:18:50.218 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.218 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.218 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.478 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.478 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.478 21:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.478 21:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.478 21:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.478 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.478 { 00:18:50.478 "cntlid": 69, 00:18:50.478 "qid": 0, 00:18:50.478 "state": "enabled", 00:18:50.478 "thread": "nvmf_tgt_poll_group_000", 00:18:50.478 "listen_address": { 00:18:50.478 "trtype": "TCP", 00:18:50.478 "adrfam": "IPv4", 00:18:50.479 "traddr": "10.0.0.2", 00:18:50.479 "trsvcid": "4420" 00:18:50.479 }, 00:18:50.479 "peer_address": { 00:18:50.479 "trtype": "TCP", 00:18:50.479 "adrfam": "IPv4", 00:18:50.479 "traddr": "10.0.0.1", 00:18:50.479 "trsvcid": "57846" 00:18:50.479 }, 00:18:50.479 "auth": { 00:18:50.479 "state": "completed", 00:18:50.479 "digest": "sha384", 00:18:50.479 "dhgroup": "ffdhe3072" 00:18:50.479 } 00:18:50.479 } 00:18:50.479 ]' 00:18:50.479 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.479 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.479 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.479 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.479 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.479 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.479 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.479 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.739 21:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret 
DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:51.309 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.309 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.309 21:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.309 21:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.309 21:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.309 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.309 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.309 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.569 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:51.569 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.570 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.829 00:18:51.829 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.829 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.829 21:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.829 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.829 21:09:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.829 21:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.829 21:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.088 { 00:18:52.088 "cntlid": 71, 00:18:52.088 "qid": 0, 00:18:52.088 "state": "enabled", 00:18:52.088 "thread": "nvmf_tgt_poll_group_000", 00:18:52.088 "listen_address": { 00:18:52.088 "trtype": "TCP", 00:18:52.088 "adrfam": "IPv4", 00:18:52.088 "traddr": "10.0.0.2", 00:18:52.088 "trsvcid": "4420" 00:18:52.088 }, 00:18:52.088 "peer_address": { 00:18:52.088 "trtype": "TCP", 00:18:52.088 "adrfam": "IPv4", 00:18:52.088 "traddr": "10.0.0.1", 00:18:52.088 "trsvcid": "57886" 00:18:52.088 }, 00:18:52.088 "auth": { 00:18:52.088 "state": "completed", 00:18:52.088 "digest": "sha384", 00:18:52.088 "dhgroup": "ffdhe3072" 00:18:52.088 } 00:18:52.088 } 00:18:52.088 ]' 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.088 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.346 21:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.916 21:09:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.176 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.436 00:18:53.436 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.436 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.436 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.695 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.696 { 00:18:53.696 "cntlid": 73, 00:18:53.696 "qid": 0, 00:18:53.696 "state": "enabled", 00:18:53.696 "thread": "nvmf_tgt_poll_group_000", 00:18:53.696 "listen_address": { 00:18:53.696 "trtype": "TCP", 00:18:53.696 "adrfam": "IPv4", 00:18:53.696 "traddr": "10.0.0.2", 00:18:53.696 "trsvcid": "4420" 00:18:53.696 }, 00:18:53.696 "peer_address": { 00:18:53.696 "trtype": "TCP", 00:18:53.696 "adrfam": "IPv4", 00:18:53.696 "traddr": "10.0.0.1", 00:18:53.696 "trsvcid": "57900" 00:18:53.696 }, 00:18:53.696 "auth": { 00:18:53.696 
"state": "completed", 00:18:53.696 "digest": "sha384", 00:18:53.696 "dhgroup": "ffdhe4096" 00:18:53.696 } 00:18:53.696 } 00:18:53.696 ]' 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.696 21:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.955 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:18:54.523 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.523 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.523 21:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.523 21:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.523 21:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.523 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.523 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:54.523 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.782 21:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.042 00:18:55.042 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.042 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.042 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.302 { 00:18:55.302 "cntlid": 75, 00:18:55.302 "qid": 0, 00:18:55.302 "state": "enabled", 00:18:55.302 "thread": "nvmf_tgt_poll_group_000", 00:18:55.302 "listen_address": { 00:18:55.302 "trtype": "TCP", 00:18:55.302 "adrfam": "IPv4", 00:18:55.302 "traddr": "10.0.0.2", 00:18:55.302 "trsvcid": "4420" 00:18:55.302 }, 00:18:55.302 "peer_address": { 00:18:55.302 "trtype": "TCP", 00:18:55.302 "adrfam": "IPv4", 00:18:55.302 "traddr": "10.0.0.1", 00:18:55.302 "trsvcid": "57934" 00:18:55.302 }, 00:18:55.302 "auth": { 00:18:55.302 "state": "completed", 00:18:55.302 "digest": "sha384", 00:18:55.302 "dhgroup": "ffdhe4096" 00:18:55.302 } 00:18:55.302 } 00:18:55.302 ]' 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.302 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.561 21:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:18:56.136 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.136 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.136 21:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.136 21:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.136 21:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.136 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.136 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:56.136 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.395 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:56.654 00:18:56.654 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.654 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.654 21:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.914 { 00:18:56.914 "cntlid": 77, 00:18:56.914 "qid": 0, 00:18:56.914 "state": "enabled", 00:18:56.914 "thread": "nvmf_tgt_poll_group_000", 00:18:56.914 "listen_address": { 00:18:56.914 "trtype": "TCP", 00:18:56.914 "adrfam": "IPv4", 00:18:56.914 "traddr": "10.0.0.2", 00:18:56.914 "trsvcid": "4420" 00:18:56.914 }, 00:18:56.914 "peer_address": { 00:18:56.914 "trtype": "TCP", 00:18:56.914 "adrfam": "IPv4", 00:18:56.914 "traddr": "10.0.0.1", 00:18:56.914 "trsvcid": "36220" 00:18:56.914 }, 00:18:56.914 "auth": { 00:18:56.914 "state": "completed", 00:18:56.914 "digest": "sha384", 00:18:56.914 "dhgroup": "ffdhe4096" 00:18:56.914 } 00:18:56.914 } 00:18:56.914 ]' 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.914 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.174 21:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:18:57.743 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.743 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.743 21:09:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.743 21:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.003 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.004 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.264 00:18:58.264 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.264 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.264 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.525 { 00:18:58.525 "cntlid": 79, 00:18:58.525 "qid": 
0, 00:18:58.525 "state": "enabled", 00:18:58.525 "thread": "nvmf_tgt_poll_group_000", 00:18:58.525 "listen_address": { 00:18:58.525 "trtype": "TCP", 00:18:58.525 "adrfam": "IPv4", 00:18:58.525 "traddr": "10.0.0.2", 00:18:58.525 "trsvcid": "4420" 00:18:58.525 }, 00:18:58.525 "peer_address": { 00:18:58.525 "trtype": "TCP", 00:18:58.525 "adrfam": "IPv4", 00:18:58.525 "traddr": "10.0.0.1", 00:18:58.525 "trsvcid": "36244" 00:18:58.525 }, 00:18:58.525 "auth": { 00:18:58.525 "state": "completed", 00:18:58.525 "digest": "sha384", 00:18:58.525 "dhgroup": "ffdhe4096" 00:18:58.525 } 00:18:58.525 } 00:18:58.525 ]' 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.525 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.785 21:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.726 21:09:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.726 21:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.986 00:18:59.986 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.986 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.986 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.248 { 00:19:00.248 "cntlid": 81, 00:19:00.248 "qid": 0, 00:19:00.248 "state": "enabled", 00:19:00.248 "thread": "nvmf_tgt_poll_group_000", 00:19:00.248 "listen_address": { 00:19:00.248 "trtype": "TCP", 00:19:00.248 "adrfam": "IPv4", 00:19:00.248 "traddr": "10.0.0.2", 00:19:00.248 "trsvcid": "4420" 00:19:00.248 }, 00:19:00.248 "peer_address": { 00:19:00.248 "trtype": "TCP", 00:19:00.248 "adrfam": "IPv4", 00:19:00.248 "traddr": "10.0.0.1", 00:19:00.248 "trsvcid": "36268" 00:19:00.248 }, 00:19:00.248 "auth": { 00:19:00.248 "state": "completed", 00:19:00.248 "digest": "sha384", 00:19:00.248 "dhgroup": "ffdhe6144" 00:19:00.248 } 00:19:00.248 } 00:19:00.248 ]' 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.248 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.509 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.509 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.509 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.509 21:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:01.081 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.081 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:01.081 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.081 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.081 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.081 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.081 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:01.081 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.373 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.634 00:19:01.634 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.634 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.634 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.894 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.895 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.895 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.895 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.895 21:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.895 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.895 { 00:19:01.895 "cntlid": 83, 00:19:01.895 "qid": 0, 00:19:01.895 "state": "enabled", 00:19:01.895 "thread": "nvmf_tgt_poll_group_000", 00:19:01.895 "listen_address": { 00:19:01.895 "trtype": "TCP", 00:19:01.895 "adrfam": "IPv4", 00:19:01.895 "traddr": "10.0.0.2", 00:19:01.895 "trsvcid": "4420" 00:19:01.895 }, 00:19:01.895 "peer_address": { 00:19:01.895 "trtype": "TCP", 00:19:01.895 "adrfam": "IPv4", 00:19:01.895 "traddr": "10.0.0.1", 00:19:01.895 "trsvcid": "36302" 00:19:01.895 }, 00:19:01.895 "auth": { 00:19:01.895 "state": "completed", 00:19:01.895 "digest": "sha384", 00:19:01.895 "dhgroup": "ffdhe6144" 00:19:01.895 } 00:19:01.895 } 00:19:01.895 ]' 00:19:01.895 21:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.895 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.895 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.895 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.895 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.895 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.895 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.895 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.155 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret 
DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:19:02.726 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.726 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.726 21:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.726 21:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.726 21:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.726 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.726 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.726 21:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.987 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.248 00:19:03.248 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.248 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.248 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.508 { 00:19:03.508 "cntlid": 85, 00:19:03.508 "qid": 0, 00:19:03.508 "state": "enabled", 00:19:03.508 "thread": "nvmf_tgt_poll_group_000", 00:19:03.508 "listen_address": { 00:19:03.508 "trtype": "TCP", 00:19:03.508 "adrfam": "IPv4", 00:19:03.508 "traddr": "10.0.0.2", 00:19:03.508 "trsvcid": "4420" 00:19:03.508 }, 00:19:03.508 "peer_address": { 00:19:03.508 "trtype": "TCP", 00:19:03.508 "adrfam": "IPv4", 00:19:03.508 "traddr": "10.0.0.1", 00:19:03.508 "trsvcid": "36324" 00:19:03.508 }, 00:19:03.508 "auth": { 00:19:03.508 "state": "completed", 00:19:03.508 "digest": "sha384", 00:19:03.508 "dhgroup": "ffdhe6144" 00:19:03.508 } 00:19:03.508 } 00:19:03.508 ]' 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.508 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.769 21:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:19:04.339 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.339 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.339 21:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.339 21:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.339 21:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.339 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.339 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
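By this point the trace has driven the same DH-HMAC-CHAP round-trip through sha384 with ffdhe3072 and ffdhe4096 and is partway through ffdhe6144. The sketch below condenses one such connect_authenticate iteration into a standalone script, using only the RPCs, addresses, and NQNs visible in the trace; it is illustrative, not part of the test run. The key names key1/ckey1 are assumed to have been registered earlier in the test (that setup is not shown in this excerpt), and the target-side RPCs are assumed to go to SPDK's default RPC socket, while the host-side bdev_nvme initiator is driven through /var/tmp/host.sock as in the trace.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration, distilled from the trace.
# Assumptions: key1/ckey1 were registered earlier in the test run; the target app
# answers on SPDK's default RPC socket; 10.0.0.2:4420 is the subsystem's TCP listener.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Host side: restrict the initiator to a single digest and DH group.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side: allow the host on the subsystem with a DH-CHAP key and a
# controller key for bidirectional authentication.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller through the SPDK initiator with the same key
# pair, which forces the authentication exchange.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Target side: confirm the qpair reports a completed authentication with the
# expected digest and DH group (the jq '.[0].auth.*' checks in the trace).
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

For the kernel-initiator leg, each iteration in the trace additionally runs nvme connect with --dhchap-secret/--dhchap-ctrl-secret (the secrets spelled out in DHHC-1 form rather than referenced by key name) followed by nvme disconnect, and only then removes the host so the subsystem is clean for the next combination.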
00:19:04.339 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.599 21:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.859 00:19:04.859 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.859 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.859 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.119 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.119 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.119 21:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.119 21:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.119 21:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.119 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.119 { 00:19:05.119 "cntlid": 87, 00:19:05.119 "qid": 0, 00:19:05.119 "state": "enabled", 00:19:05.119 "thread": "nvmf_tgt_poll_group_000", 00:19:05.120 "listen_address": { 00:19:05.120 "trtype": "TCP", 00:19:05.120 "adrfam": "IPv4", 00:19:05.120 "traddr": "10.0.0.2", 00:19:05.120 "trsvcid": "4420" 00:19:05.120 }, 00:19:05.120 "peer_address": { 00:19:05.120 "trtype": "TCP", 00:19:05.120 "adrfam": "IPv4", 00:19:05.120 "traddr": "10.0.0.1", 00:19:05.120 "trsvcid": "36354" 00:19:05.120 }, 00:19:05.120 "auth": { 00:19:05.120 "state": "completed", 
00:19:05.120 "digest": "sha384", 00:19:05.120 "dhgroup": "ffdhe6144" 00:19:05.120 } 00:19:05.120 } 00:19:05.120 ]' 00:19:05.120 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.120 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.120 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.120 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.120 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.120 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.120 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.120 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.380 21:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.948 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.207 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.775 00:19:06.775 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.775 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.775 21:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.775 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.775 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.775 21:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.775 21:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.775 21:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.775 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.775 { 00:19:06.775 "cntlid": 89, 00:19:06.775 "qid": 0, 00:19:06.775 "state": "enabled", 00:19:06.775 "thread": "nvmf_tgt_poll_group_000", 00:19:06.775 "listen_address": { 00:19:06.775 "trtype": "TCP", 00:19:06.775 "adrfam": "IPv4", 00:19:06.775 "traddr": "10.0.0.2", 00:19:06.775 "trsvcid": "4420" 00:19:06.775 }, 00:19:06.775 "peer_address": { 00:19:06.775 "trtype": "TCP", 00:19:06.775 "adrfam": "IPv4", 00:19:06.775 "traddr": "10.0.0.1", 00:19:06.775 "trsvcid": "60720" 00:19:06.775 }, 00:19:06.775 "auth": { 00:19:06.775 "state": "completed", 00:19:06.775 "digest": "sha384", 00:19:06.775 "dhgroup": "ffdhe8192" 00:19:06.775 } 00:19:06.775 } 00:19:06.775 ]' 00:19:07.035 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.035 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.035 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.035 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.035 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.035 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.035 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.035 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.293 21:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:07.861 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.862 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.862 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.862 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.862 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.862 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.862 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:07.862 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.121 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.122 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
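After each attach, the trace also verifies the negotiated parameters on the target side and re-checks the same key material in-band with nvme-cli before tearing the association down for the next iteration. Condensed into a sketch (again: $rpc, $hostnqn and $hostid abbreviate the rpc.py invocation, host NQN and host ID used throughout this trace, and $key/$ckey stand for the full DHHC-1 secrets printed above):

  # Confirm the controller attached and the qpair authenticated with the expected digest/dhgroup
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'                                         # nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # ffdhe8192
  $rpc bdev_nvme_detach_controller nvme0
  # Repeat the handshake with the kernel initiator, then clean up for the next key/dhgroup pair
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"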
00:19:08.692 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.692 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.692 { 00:19:08.692 "cntlid": 91, 00:19:08.693 "qid": 0, 00:19:08.693 "state": "enabled", 00:19:08.693 "thread": "nvmf_tgt_poll_group_000", 00:19:08.693 "listen_address": { 00:19:08.693 "trtype": "TCP", 00:19:08.693 "adrfam": "IPv4", 00:19:08.693 "traddr": "10.0.0.2", 00:19:08.693 "trsvcid": "4420" 00:19:08.693 }, 00:19:08.693 "peer_address": { 00:19:08.693 "trtype": "TCP", 00:19:08.693 "adrfam": "IPv4", 00:19:08.693 "traddr": "10.0.0.1", 00:19:08.693 "trsvcid": "60744" 00:19:08.693 }, 00:19:08.693 "auth": { 00:19:08.693 "state": "completed", 00:19:08.693 "digest": "sha384", 00:19:08.693 "dhgroup": "ffdhe8192" 00:19:08.693 } 00:19:08.693 } 00:19:08.693 ]' 00:19:08.693 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.952 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.952 21:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.952 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.952 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.952 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.952 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.952 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.212 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:19:09.782 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.782 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.782 21:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:09.782 21:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.782 21:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.782 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.782 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:09.782 21:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.042 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.614 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.614 { 
00:19:10.614 "cntlid": 93, 00:19:10.614 "qid": 0, 00:19:10.614 "state": "enabled", 00:19:10.614 "thread": "nvmf_tgt_poll_group_000", 00:19:10.614 "listen_address": { 00:19:10.614 "trtype": "TCP", 00:19:10.614 "adrfam": "IPv4", 00:19:10.614 "traddr": "10.0.0.2", 00:19:10.614 "trsvcid": "4420" 00:19:10.614 }, 00:19:10.614 "peer_address": { 00:19:10.614 "trtype": "TCP", 00:19:10.614 "adrfam": "IPv4", 00:19:10.614 "traddr": "10.0.0.1", 00:19:10.614 "trsvcid": "60768" 00:19:10.614 }, 00:19:10.614 "auth": { 00:19:10.614 "state": "completed", 00:19:10.614 "digest": "sha384", 00:19:10.614 "dhgroup": "ffdhe8192" 00:19:10.614 } 00:19:10.614 } 00:19:10.614 ]' 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.614 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.875 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.875 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.875 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.875 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.875 21:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.875 21:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:19:11.818 21:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.818 21:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.818 21:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.818 21:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.818 21:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.818 21:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.818 21:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.818 21:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.818 21:09:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.818 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.391 00:19:12.391 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.391 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.391 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.652 { 00:19:12.652 "cntlid": 95, 00:19:12.652 "qid": 0, 00:19:12.652 "state": "enabled", 00:19:12.652 "thread": "nvmf_tgt_poll_group_000", 00:19:12.652 "listen_address": { 00:19:12.652 "trtype": "TCP", 00:19:12.652 "adrfam": "IPv4", 00:19:12.652 "traddr": "10.0.0.2", 00:19:12.652 "trsvcid": "4420" 00:19:12.652 }, 00:19:12.652 "peer_address": { 00:19:12.652 "trtype": "TCP", 00:19:12.652 "adrfam": "IPv4", 00:19:12.652 "traddr": "10.0.0.1", 00:19:12.652 "trsvcid": "60806" 00:19:12.652 }, 00:19:12.652 "auth": { 00:19:12.652 "state": "completed", 00:19:12.652 "digest": "sha384", 00:19:12.652 "dhgroup": "ffdhe8192" 00:19:12.652 } 00:19:12.652 } 00:19:12.652 ]' 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.652 21:09:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.652 21:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.912 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.481 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.742 21:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.742 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.003 { 00:19:14.003 "cntlid": 97, 00:19:14.003 "qid": 0, 00:19:14.003 "state": "enabled", 00:19:14.003 "thread": "nvmf_tgt_poll_group_000", 00:19:14.003 "listen_address": { 00:19:14.003 "trtype": "TCP", 00:19:14.003 "adrfam": "IPv4", 00:19:14.003 "traddr": "10.0.0.2", 00:19:14.003 "trsvcid": "4420" 00:19:14.003 }, 00:19:14.003 "peer_address": { 00:19:14.003 "trtype": "TCP", 00:19:14.003 "adrfam": "IPv4", 00:19:14.003 "traddr": "10.0.0.1", 00:19:14.003 "trsvcid": "60824" 00:19:14.003 }, 00:19:14.003 "auth": { 00:19:14.003 "state": "completed", 00:19:14.003 "digest": "sha512", 00:19:14.003 "dhgroup": "null" 00:19:14.003 } 00:19:14.003 } 00:19:14.003 ]' 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.003 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.281 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.281 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.281 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.281 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.281 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.281 21:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret 
DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.279 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.540 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.540 { 00:19:15.540 "cntlid": 99, 00:19:15.540 "qid": 0, 00:19:15.540 "state": "enabled", 00:19:15.540 "thread": "nvmf_tgt_poll_group_000", 00:19:15.540 "listen_address": { 00:19:15.540 "trtype": "TCP", 00:19:15.540 "adrfam": "IPv4", 00:19:15.540 "traddr": "10.0.0.2", 00:19:15.540 "trsvcid": "4420" 00:19:15.540 }, 00:19:15.540 "peer_address": { 00:19:15.540 "trtype": "TCP", 00:19:15.540 "adrfam": "IPv4", 00:19:15.540 "traddr": "10.0.0.1", 00:19:15.540 "trsvcid": "60850" 00:19:15.540 }, 00:19:15.540 "auth": { 00:19:15.540 "state": "completed", 00:19:15.540 "digest": "sha512", 00:19:15.540 "dhgroup": "null" 00:19:15.540 } 00:19:15.540 } 00:19:15.540 ]' 00:19:15.540 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.799 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.799 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.799 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:15.799 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.800 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.800 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.800 21:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.800 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:16.738 21:09:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.738 21:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.998 00:19:16.998 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.998 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.998 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.258 { 00:19:17.258 "cntlid": 101, 00:19:17.258 "qid": 0, 00:19:17.258 "state": "enabled", 00:19:17.258 "thread": "nvmf_tgt_poll_group_000", 00:19:17.258 "listen_address": { 00:19:17.258 "trtype": "TCP", 00:19:17.258 "adrfam": "IPv4", 00:19:17.258 "traddr": "10.0.0.2", 00:19:17.258 "trsvcid": "4420" 00:19:17.258 }, 00:19:17.258 "peer_address": { 00:19:17.258 "trtype": "TCP", 00:19:17.258 "adrfam": "IPv4", 00:19:17.258 "traddr": "10.0.0.1", 00:19:17.258 "trsvcid": "53428" 00:19:17.258 }, 00:19:17.258 "auth": 
{ 00:19:17.258 "state": "completed", 00:19:17.258 "digest": "sha512", 00:19:17.258 "dhgroup": "null" 00:19:17.258 } 00:19:17.258 } 00:19:17.258 ]' 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.258 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.518 21:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:19:18.090 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.090 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.090 21:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.090 21:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.090 21:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.090 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.090 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:18.090 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:18.351 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:18.351 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.351 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.352 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.612 00:19:18.612 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.612 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.612 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.872 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.872 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.872 21:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.872 21:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.872 21:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.872 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.872 { 00:19:18.872 "cntlid": 103, 00:19:18.872 "qid": 0, 00:19:18.872 "state": "enabled", 00:19:18.872 "thread": "nvmf_tgt_poll_group_000", 00:19:18.872 "listen_address": { 00:19:18.872 "trtype": "TCP", 00:19:18.872 "adrfam": "IPv4", 00:19:18.873 "traddr": "10.0.0.2", 00:19:18.873 "trsvcid": "4420" 00:19:18.873 }, 00:19:18.873 "peer_address": { 00:19:18.873 "trtype": "TCP", 00:19:18.873 "adrfam": "IPv4", 00:19:18.873 "traddr": "10.0.0.1", 00:19:18.873 "trsvcid": "53438" 00:19:18.873 }, 00:19:18.873 "auth": { 00:19:18.873 "state": "completed", 00:19:18.873 "digest": "sha512", 00:19:18.873 "dhgroup": "null" 00:19:18.873 } 00:19:18.873 } 00:19:18.873 ]' 00:19:18.873 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.873 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.873 21:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.873 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:18.873 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.873 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.873 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.873 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.134 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.706 21:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.966 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:19.966 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.966 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.967 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.227 00:19:20.227 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.227 21:09:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.227 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.227 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.488 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.488 21:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.488 21:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.488 21:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.488 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.488 { 00:19:20.488 "cntlid": 105, 00:19:20.488 "qid": 0, 00:19:20.488 "state": "enabled", 00:19:20.488 "thread": "nvmf_tgt_poll_group_000", 00:19:20.488 "listen_address": { 00:19:20.488 "trtype": "TCP", 00:19:20.488 "adrfam": "IPv4", 00:19:20.488 "traddr": "10.0.0.2", 00:19:20.488 "trsvcid": "4420" 00:19:20.489 }, 00:19:20.489 "peer_address": { 00:19:20.489 "trtype": "TCP", 00:19:20.489 "adrfam": "IPv4", 00:19:20.489 "traddr": "10.0.0.1", 00:19:20.489 "trsvcid": "53458" 00:19:20.489 }, 00:19:20.489 "auth": { 00:19:20.489 "state": "completed", 00:19:20.489 "digest": "sha512", 00:19:20.489 "dhgroup": "ffdhe2048" 00:19:20.489 } 00:19:20.489 } 00:19:20.489 ]' 00:19:20.489 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.489 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.489 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.489 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.489 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.489 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.489 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.489 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.749 21:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:21.321 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.321 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.321 21:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.321 21:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:21.321 21:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.321 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.321 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:21.321 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.583 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.844 00:19:21.844 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.844 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.844 21:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.844 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.844 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.844 21:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.844 21:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.844 21:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.844 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.844 { 00:19:21.844 "cntlid": 107, 00:19:21.844 "qid": 0, 00:19:21.844 "state": "enabled", 00:19:21.844 
"thread": "nvmf_tgt_poll_group_000", 00:19:21.844 "listen_address": { 00:19:21.844 "trtype": "TCP", 00:19:21.844 "adrfam": "IPv4", 00:19:21.844 "traddr": "10.0.0.2", 00:19:21.844 "trsvcid": "4420" 00:19:21.844 }, 00:19:21.844 "peer_address": { 00:19:21.844 "trtype": "TCP", 00:19:21.844 "adrfam": "IPv4", 00:19:21.844 "traddr": "10.0.0.1", 00:19:21.844 "trsvcid": "53478" 00:19:21.844 }, 00:19:21.844 "auth": { 00:19:21.844 "state": "completed", 00:19:21.844 "digest": "sha512", 00:19:21.844 "dhgroup": "ffdhe2048" 00:19:21.844 } 00:19:21.845 } 00:19:21.845 ]' 00:19:21.845 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.106 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.106 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.106 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.106 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.106 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.106 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.106 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.366 21:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:19:22.939 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.939 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.939 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.939 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.939 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.939 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.939 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:22.939 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:23.200 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:23.201 21:09:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.201 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.462 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.462 { 00:19:23.462 "cntlid": 109, 00:19:23.462 "qid": 0, 00:19:23.462 "state": "enabled", 00:19:23.462 "thread": "nvmf_tgt_poll_group_000", 00:19:23.462 "listen_address": { 00:19:23.462 "trtype": "TCP", 00:19:23.462 "adrfam": "IPv4", 00:19:23.462 "traddr": "10.0.0.2", 00:19:23.462 "trsvcid": "4420" 00:19:23.462 }, 00:19:23.462 "peer_address": { 00:19:23.462 "trtype": "TCP", 00:19:23.462 "adrfam": "IPv4", 00:19:23.462 "traddr": "10.0.0.1", 00:19:23.462 "trsvcid": "53502" 00:19:23.462 }, 00:19:23.462 "auth": { 00:19:23.462 "state": "completed", 00:19:23.462 "digest": "sha512", 00:19:23.462 "dhgroup": "ffdhe2048" 00:19:23.462 } 00:19:23.462 } 00:19:23.462 ]' 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.462 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.723 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.723 21:09:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.723 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.723 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.723 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.723 21:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.666 21:09:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.927 00:19:24.927 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.928 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.928 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.189 { 00:19:25.189 "cntlid": 111, 00:19:25.189 "qid": 0, 00:19:25.189 "state": "enabled", 00:19:25.189 "thread": "nvmf_tgt_poll_group_000", 00:19:25.189 "listen_address": { 00:19:25.189 "trtype": "TCP", 00:19:25.189 "adrfam": "IPv4", 00:19:25.189 "traddr": "10.0.0.2", 00:19:25.189 "trsvcid": "4420" 00:19:25.189 }, 00:19:25.189 "peer_address": { 00:19:25.189 "trtype": "TCP", 00:19:25.189 "adrfam": "IPv4", 00:19:25.189 "traddr": "10.0.0.1", 00:19:25.189 "trsvcid": "53526" 00:19:25.189 }, 00:19:25.189 "auth": { 00:19:25.189 "state": "completed", 00:19:25.189 "digest": "sha512", 00:19:25.189 "dhgroup": "ffdhe2048" 00:19:25.189 } 00:19:25.189 } 00:19:25.189 ]' 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.189 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.451 21:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.023 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.282 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:26.282 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.283 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.547 00:19:26.547 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.547 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.547 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.808 { 00:19:26.808 "cntlid": 113, 00:19:26.808 "qid": 0, 00:19:26.808 "state": "enabled", 00:19:26.808 "thread": "nvmf_tgt_poll_group_000", 00:19:26.808 "listen_address": { 00:19:26.808 "trtype": "TCP", 00:19:26.808 "adrfam": "IPv4", 00:19:26.808 "traddr": "10.0.0.2", 00:19:26.808 "trsvcid": "4420" 00:19:26.808 }, 00:19:26.808 "peer_address": { 00:19:26.808 "trtype": "TCP", 00:19:26.808 "adrfam": "IPv4", 00:19:26.808 "traddr": "10.0.0.1", 00:19:26.808 "trsvcid": "60486" 00:19:26.808 }, 00:19:26.808 "auth": { 00:19:26.808 "state": "completed", 00:19:26.808 "digest": "sha512", 00:19:26.808 "dhgroup": "ffdhe3072" 00:19:26.808 } 00:19:26.808 } 00:19:26.808 ]' 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.808 21:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.808 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.808 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.808 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.069 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:27.640 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.640 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.640 21:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.640 21:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.640 21:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.640 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.640 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.640 21:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.901 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.902 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.162 00:19:28.162 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.162 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.162 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.422 { 00:19:28.422 "cntlid": 115, 00:19:28.422 "qid": 0, 00:19:28.422 "state": "enabled", 00:19:28.422 "thread": "nvmf_tgt_poll_group_000", 00:19:28.422 "listen_address": { 00:19:28.422 "trtype": "TCP", 00:19:28.422 "adrfam": "IPv4", 00:19:28.422 "traddr": "10.0.0.2", 00:19:28.422 "trsvcid": "4420" 00:19:28.422 }, 00:19:28.422 "peer_address": { 00:19:28.422 "trtype": "TCP", 00:19:28.422 "adrfam": "IPv4", 00:19:28.422 "traddr": "10.0.0.1", 00:19:28.422 "trsvcid": "60524" 00:19:28.422 }, 00:19:28.422 "auth": { 00:19:28.422 "state": "completed", 00:19:28.422 "digest": "sha512", 00:19:28.422 "dhgroup": "ffdhe3072" 00:19:28.422 } 00:19:28.422 } 
00:19:28.422 ]' 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.422 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.683 21:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:19:29.256 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.256 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:29.256 21:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.256 21:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.256 21:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.256 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.256 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:29.256 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.517 21:09:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.517 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.777 00:19:29.777 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.777 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.777 21:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.777 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.777 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.777 21:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.777 21:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.038 { 00:19:30.038 "cntlid": 117, 00:19:30.038 "qid": 0, 00:19:30.038 "state": "enabled", 00:19:30.038 "thread": "nvmf_tgt_poll_group_000", 00:19:30.038 "listen_address": { 00:19:30.038 "trtype": "TCP", 00:19:30.038 "adrfam": "IPv4", 00:19:30.038 "traddr": "10.0.0.2", 00:19:30.038 "trsvcid": "4420" 00:19:30.038 }, 00:19:30.038 "peer_address": { 00:19:30.038 "trtype": "TCP", 00:19:30.038 "adrfam": "IPv4", 00:19:30.038 "traddr": "10.0.0.1", 00:19:30.038 "trsvcid": "60552" 00:19:30.038 }, 00:19:30.038 "auth": { 00:19:30.038 "state": "completed", 00:19:30.038 "digest": "sha512", 00:19:30.038 "dhgroup": "ffdhe3072" 00:19:30.038 } 00:19:30.038 } 00:19:30.038 ]' 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.038 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.299 21:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:19:30.870 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.132 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.132 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.132 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.132 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.132 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.132 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.133 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.394 00:19:31.394 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.394 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.394 21:09:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.655 { 00:19:31.655 "cntlid": 119, 00:19:31.655 "qid": 0, 00:19:31.655 "state": "enabled", 00:19:31.655 "thread": "nvmf_tgt_poll_group_000", 00:19:31.655 "listen_address": { 00:19:31.655 "trtype": "TCP", 00:19:31.655 "adrfam": "IPv4", 00:19:31.655 "traddr": "10.0.0.2", 00:19:31.655 "trsvcid": "4420" 00:19:31.655 }, 00:19:31.655 "peer_address": { 00:19:31.655 "trtype": "TCP", 00:19:31.655 "adrfam": "IPv4", 00:19:31.655 "traddr": "10.0.0.1", 00:19:31.655 "trsvcid": "60590" 00:19:31.655 }, 00:19:31.655 "auth": { 00:19:31.655 "state": "completed", 00:19:31.655 "digest": "sha512", 00:19:31.655 "dhgroup": "ffdhe3072" 00:19:31.655 } 00:19:31.655 } 00:19:31.655 ]' 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.655 21:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.916 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.858 21:09:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.858 21:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.859 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.859 21:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.119 00:19:33.119 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.119 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.119 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.380 { 00:19:33.380 "cntlid": 121, 00:19:33.380 "qid": 0, 00:19:33.380 "state": "enabled", 00:19:33.380 "thread": "nvmf_tgt_poll_group_000", 00:19:33.380 "listen_address": { 00:19:33.380 "trtype": "TCP", 00:19:33.380 "adrfam": "IPv4", 
00:19:33.380 "traddr": "10.0.0.2", 00:19:33.380 "trsvcid": "4420" 00:19:33.380 }, 00:19:33.380 "peer_address": { 00:19:33.380 "trtype": "TCP", 00:19:33.380 "adrfam": "IPv4", 00:19:33.380 "traddr": "10.0.0.1", 00:19:33.380 "trsvcid": "60612" 00:19:33.380 }, 00:19:33.380 "auth": { 00:19:33.380 "state": "completed", 00:19:33.380 "digest": "sha512", 00:19:33.380 "dhgroup": "ffdhe4096" 00:19:33.380 } 00:19:33.380 } 00:19:33.380 ]' 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.380 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.641 21:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:34.213 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.213 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.213 21:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.213 21:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.213 21:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.213 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.213 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.213 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.474 21:10:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.474 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.735 00:19:34.735 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.735 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.735 21:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.995 { 00:19:34.995 "cntlid": 123, 00:19:34.995 "qid": 0, 00:19:34.995 "state": "enabled", 00:19:34.995 "thread": "nvmf_tgt_poll_group_000", 00:19:34.995 "listen_address": { 00:19:34.995 "trtype": "TCP", 00:19:34.995 "adrfam": "IPv4", 00:19:34.995 "traddr": "10.0.0.2", 00:19:34.995 "trsvcid": "4420" 00:19:34.995 }, 00:19:34.995 "peer_address": { 00:19:34.995 "trtype": "TCP", 00:19:34.995 "adrfam": "IPv4", 00:19:34.995 "traddr": "10.0.0.1", 00:19:34.995 "trsvcid": "60644" 00:19:34.995 }, 00:19:34.995 "auth": { 00:19:34.995 "state": "completed", 00:19:34.995 "digest": "sha512", 00:19:34.995 "dhgroup": "ffdhe4096" 00:19:34.995 } 00:19:34.995 } 00:19:34.995 ]' 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.995 21:10:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.995 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.255 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:19:35.826 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.826 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.826 21:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.826 21:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.826 21:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.826 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.826 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.826 21:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.093 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.093 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.353 { 00:19:36.353 "cntlid": 125, 00:19:36.353 "qid": 0, 00:19:36.353 "state": "enabled", 00:19:36.353 "thread": "nvmf_tgt_poll_group_000", 00:19:36.353 "listen_address": { 00:19:36.353 "trtype": "TCP", 00:19:36.353 "adrfam": "IPv4", 00:19:36.353 "traddr": "10.0.0.2", 00:19:36.353 "trsvcid": "4420" 00:19:36.353 }, 00:19:36.353 "peer_address": { 00:19:36.353 "trtype": "TCP", 00:19:36.353 "adrfam": "IPv4", 00:19:36.353 "traddr": "10.0.0.1", 00:19:36.353 "trsvcid": "57862" 00:19:36.353 }, 00:19:36.353 "auth": { 00:19:36.353 "state": "completed", 00:19:36.353 "digest": "sha512", 00:19:36.353 "dhgroup": "ffdhe4096" 00:19:36.353 } 00:19:36.353 } 00:19:36.353 ]' 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.353 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.613 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.613 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.613 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.613 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.613 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.613 21:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.552 21:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.813 00:19:37.813 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.813 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.813 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.073 { 00:19:38.073 "cntlid": 127, 00:19:38.073 "qid": 0, 00:19:38.073 "state": "enabled", 00:19:38.073 "thread": "nvmf_tgt_poll_group_000", 00:19:38.073 "listen_address": { 00:19:38.073 "trtype": "TCP", 00:19:38.073 "adrfam": "IPv4", 00:19:38.073 "traddr": "10.0.0.2", 00:19:38.073 "trsvcid": "4420" 00:19:38.073 }, 00:19:38.073 "peer_address": { 00:19:38.073 "trtype": "TCP", 00:19:38.073 "adrfam": "IPv4", 00:19:38.073 "traddr": "10.0.0.1", 00:19:38.073 "trsvcid": "57890" 00:19:38.073 }, 00:19:38.073 "auth": { 00:19:38.073 "state": "completed", 00:19:38.073 "digest": "sha512", 00:19:38.073 "dhgroup": "ffdhe4096" 00:19:38.073 } 00:19:38.073 } 00:19:38.073 ]' 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.073 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.333 21:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.273 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.532 00:19:39.533 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.533 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.533 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.792 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.792 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.792 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.792 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.792 21:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.792 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.792 { 00:19:39.792 "cntlid": 129, 00:19:39.792 "qid": 0, 00:19:39.792 "state": "enabled", 00:19:39.792 "thread": "nvmf_tgt_poll_group_000", 00:19:39.792 "listen_address": { 00:19:39.792 "trtype": "TCP", 00:19:39.792 "adrfam": "IPv4", 00:19:39.792 "traddr": "10.0.0.2", 00:19:39.792 "trsvcid": "4420" 00:19:39.792 }, 00:19:39.792 "peer_address": { 00:19:39.792 "trtype": "TCP", 00:19:39.792 "adrfam": "IPv4", 00:19:39.792 "traddr": "10.0.0.1", 00:19:39.792 "trsvcid": "57910" 00:19:39.792 }, 00:19:39.792 "auth": { 00:19:39.792 "state": "completed", 00:19:39.792 "digest": "sha512", 00:19:39.792 "dhgroup": "ffdhe6144" 00:19:39.792 } 00:19:39.792 } 00:19:39.792 ]' 00:19:39.792 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.792 21:10:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.792 21:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.792 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.792 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.792 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.792 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.792 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.052 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:40.992 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.992 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:40.992 21:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.992 21:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.992 21:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.992 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.992 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.992 21:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.992 21:10:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.992 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.252 00:19:41.252 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.252 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.252 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.512 { 00:19:41.512 "cntlid": 131, 00:19:41.512 "qid": 0, 00:19:41.512 "state": "enabled", 00:19:41.512 "thread": "nvmf_tgt_poll_group_000", 00:19:41.512 "listen_address": { 00:19:41.512 "trtype": "TCP", 00:19:41.512 "adrfam": "IPv4", 00:19:41.512 "traddr": "10.0.0.2", 00:19:41.512 "trsvcid": "4420" 00:19:41.512 }, 00:19:41.512 "peer_address": { 00:19:41.512 "trtype": "TCP", 00:19:41.512 "adrfam": "IPv4", 00:19:41.512 "traddr": "10.0.0.1", 00:19:41.512 "trsvcid": "57942" 00:19:41.512 }, 00:19:41.512 "auth": { 00:19:41.512 "state": "completed", 00:19:41.512 "digest": "sha512", 00:19:41.512 "dhgroup": "ffdhe6144" 00:19:41.512 } 00:19:41.512 } 00:19:41.512 ]' 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.512 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.772 21:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.715 21:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.976 00:19:42.976 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.976 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.976 21:10:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.236 { 00:19:43.236 "cntlid": 133, 00:19:43.236 "qid": 0, 00:19:43.236 "state": "enabled", 00:19:43.236 "thread": "nvmf_tgt_poll_group_000", 00:19:43.236 "listen_address": { 00:19:43.236 "trtype": "TCP", 00:19:43.236 "adrfam": "IPv4", 00:19:43.236 "traddr": "10.0.0.2", 00:19:43.236 "trsvcid": "4420" 00:19:43.236 }, 00:19:43.236 "peer_address": { 00:19:43.236 "trtype": "TCP", 00:19:43.236 "adrfam": "IPv4", 00:19:43.236 "traddr": "10.0.0.1", 00:19:43.236 "trsvcid": "57980" 00:19:43.236 }, 00:19:43.236 "auth": { 00:19:43.236 "state": "completed", 00:19:43.236 "digest": "sha512", 00:19:43.236 "dhgroup": "ffdhe6144" 00:19:43.236 } 00:19:43.236 } 00:19:43.236 ]' 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.236 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.496 21:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:19:44.068 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.328 21:10:11 
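The nvme connect / nvme disconnect pair in the trace above is the second half of every iteration: after the SPDK host app has attached and detached with the key under test, nvme-cli is handed the DHHC-1 secrets shown in the trace so the kernel initiator's in-band DH-HMAC-CHAP path is exercised as well. That host-side leg in isolation, reusing the secrets printed in this run (test keys only, not production material):

    # Kernel initiator connect with in-band authentication; one I/O queue (-i 1),
    # host NQN and host ID pinned to the UUID the subsystem host entry was added with.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: \
        --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0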
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.328 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.899 00:19:44.899 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.899 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.899 21:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.899 { 00:19:44.899 "cntlid": 135, 00:19:44.899 "qid": 0, 00:19:44.899 "state": "enabled", 00:19:44.899 "thread": "nvmf_tgt_poll_group_000", 00:19:44.899 "listen_address": { 00:19:44.899 "trtype": "TCP", 00:19:44.899 "adrfam": "IPv4", 00:19:44.899 "traddr": "10.0.0.2", 00:19:44.899 "trsvcid": "4420" 00:19:44.899 }, 
00:19:44.899 "peer_address": { 00:19:44.899 "trtype": "TCP", 00:19:44.899 "adrfam": "IPv4", 00:19:44.899 "traddr": "10.0.0.1", 00:19:44.899 "trsvcid": "58018" 00:19:44.899 }, 00:19:44.899 "auth": { 00:19:44.899 "state": "completed", 00:19:44.899 "digest": "sha512", 00:19:44.899 "dhgroup": "ffdhe6144" 00:19:44.899 } 00:19:44.899 } 00:19:44.899 ]' 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.899 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.179 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.179 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.179 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.180 21:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:45.842 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.842 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.842 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.842 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.842 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.842 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.842 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.842 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.843 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.103 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.673 00:19:46.673 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.673 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.673 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.934 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.934 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.934 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.934 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.934 21:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.934 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.934 { 00:19:46.934 "cntlid": 137, 00:19:46.934 "qid": 0, 00:19:46.934 "state": "enabled", 00:19:46.934 "thread": "nvmf_tgt_poll_group_000", 00:19:46.934 "listen_address": { 00:19:46.934 "trtype": "TCP", 00:19:46.934 "adrfam": "IPv4", 00:19:46.934 "traddr": "10.0.0.2", 00:19:46.934 "trsvcid": "4420" 00:19:46.934 }, 00:19:46.934 "peer_address": { 00:19:46.934 "trtype": "TCP", 00:19:46.934 "adrfam": "IPv4", 00:19:46.934 "traddr": "10.0.0.1", 00:19:46.934 "trsvcid": "47034" 00:19:46.934 }, 00:19:46.934 "auth": { 00:19:46.934 "state": "completed", 00:19:46.934 "digest": "sha512", 00:19:46.934 "dhgroup": "ffdhe8192" 00:19:46.934 } 00:19:46.934 } 00:19:46.934 ]' 00:19:46.934 21:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.934 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.934 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.934 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.934 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.934 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.934 21:10:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.934 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.195 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:47.766 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.766 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:47.766 21:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.766 21:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.766 21:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.766 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.766 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.766 21:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.028 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.601 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.601 { 00:19:48.601 "cntlid": 139, 00:19:48.601 "qid": 0, 00:19:48.601 "state": "enabled", 00:19:48.601 "thread": "nvmf_tgt_poll_group_000", 00:19:48.601 "listen_address": { 00:19:48.601 "trtype": "TCP", 00:19:48.601 "adrfam": "IPv4", 00:19:48.601 "traddr": "10.0.0.2", 00:19:48.601 "trsvcid": "4420" 00:19:48.601 }, 00:19:48.601 "peer_address": { 00:19:48.601 "trtype": "TCP", 00:19:48.601 "adrfam": "IPv4", 00:19:48.601 "traddr": "10.0.0.1", 00:19:48.601 "trsvcid": "47070" 00:19:48.601 }, 00:19:48.601 "auth": { 00:19:48.601 "state": "completed", 00:19:48.601 "digest": "sha512", 00:19:48.601 "dhgroup": "ffdhe8192" 00:19:48.601 } 00:19:48.601 } 00:19:48.601 ]' 00:19:48.601 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.862 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.862 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.862 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.862 21:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.862 21:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.862 21:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.862 21:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.123 21:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MzM3OTQ3YjU0N2Y4MjNhMmIxN2QwZmEyNzQzYzEwYTdYqBUu: --dhchap-ctrl-secret DHHC-1:02:ZDI4YjYyNjAzMDU4NmNmNzZmYmExOGVhOGZlY2FjMjFjMjVhNGQwNmNhMjEzOWZh+vE8Ug==: 00:19:49.695 21:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.695 21:10:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:49.695 21:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.695 21:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.695 21:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.695 21:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.695 21:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:49.695 21:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:49.956 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:49.956 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.956 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.956 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.957 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.957 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.957 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.957 21:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.957 21:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.957 21:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.957 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.957 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.529 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.529 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.529 { 00:19:50.529 "cntlid": 141, 00:19:50.529 "qid": 0, 00:19:50.529 "state": "enabled", 00:19:50.529 "thread": "nvmf_tgt_poll_group_000", 00:19:50.529 "listen_address": { 00:19:50.529 "trtype": "TCP", 00:19:50.529 "adrfam": "IPv4", 00:19:50.529 "traddr": "10.0.0.2", 00:19:50.529 "trsvcid": "4420" 00:19:50.529 }, 00:19:50.529 "peer_address": { 00:19:50.529 "trtype": "TCP", 00:19:50.529 "adrfam": "IPv4", 00:19:50.529 "traddr": "10.0.0.1", 00:19:50.529 "trsvcid": "47092" 00:19:50.529 }, 00:19:50.529 "auth": { 00:19:50.529 "state": "completed", 00:19:50.529 "digest": "sha512", 00:19:50.529 "dhgroup": "ffdhe8192" 00:19:50.529 } 00:19:50.530 } 00:19:50.530 ]' 00:19:50.530 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.530 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.530 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.790 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.790 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.790 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.790 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.790 21:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.790 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:N2JkN2U4MWNjOTg2MmFkNWY3OWEwNTYzYTVmODNiMzllNDFjNDllZWJhOWVhYjE5tCdc0A==: --dhchap-ctrl-secret DHHC-1:01:ZTRjYzE0YmJjNjE4NmFkY2E2YTdjMTljZTM0MzczYTn7P0A8: 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.733 21:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.305 00:19:52.305 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.305 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.305 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.566 { 00:19:52.566 "cntlid": 143, 00:19:52.566 "qid": 0, 00:19:52.566 "state": "enabled", 00:19:52.566 "thread": "nvmf_tgt_poll_group_000", 00:19:52.566 "listen_address": { 00:19:52.566 "trtype": "TCP", 00:19:52.566 "adrfam": "IPv4", 00:19:52.566 "traddr": "10.0.0.2", 00:19:52.566 "trsvcid": "4420" 00:19:52.566 }, 00:19:52.566 "peer_address": { 00:19:52.566 "trtype": "TCP", 00:19:52.566 "adrfam": "IPv4", 00:19:52.566 "traddr": "10.0.0.1", 00:19:52.566 "trsvcid": "47122" 00:19:52.566 }, 00:19:52.566 "auth": { 00:19:52.566 "state": "completed", 00:19:52.566 "digest": "sha512", 00:19:52.566 "dhgroup": "ffdhe8192" 00:19:52.566 } 00:19:52.566 } 00:19:52.566 ]' 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.566 
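Zooming out, this stretch of the log is one digest's worth of a dhgroup-by-key sweep, driven by the for-loops visible in the trace (for dhgroup in "${dhgroups[@]}", for keyid in "${!keys[@]}"). A reconstructed sketch of that loop shape, not the script verbatim; only the slice exercised in this part of the run is shown, and connect_authenticate / hostrpc are the script's own helpers traced above:

    # keys/ckeys and the hostrpc/connect_authenticate helpers come from earlier in target/auth.sh.
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
      done
    done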
21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.566 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.827 21:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:53.397 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.397 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:53.397 21:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.397 21:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.397 21:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.658 21:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.229 00:19:54.229 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.229 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.229 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.490 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.490 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.490 21:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.490 21:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.490 21:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.490 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.490 { 00:19:54.490 "cntlid": 145, 00:19:54.490 "qid": 0, 00:19:54.490 "state": "enabled", 00:19:54.490 "thread": "nvmf_tgt_poll_group_000", 00:19:54.490 "listen_address": { 00:19:54.490 "trtype": "TCP", 00:19:54.490 "adrfam": "IPv4", 00:19:54.490 "traddr": "10.0.0.2", 00:19:54.490 "trsvcid": "4420" 00:19:54.490 }, 00:19:54.490 "peer_address": { 00:19:54.490 "trtype": "TCP", 00:19:54.490 "adrfam": "IPv4", 00:19:54.490 "traddr": "10.0.0.1", 00:19:54.490 "trsvcid": "47136" 00:19:54.490 }, 00:19:54.490 "auth": { 00:19:54.490 "state": "completed", 00:19:54.491 "digest": "sha512", 00:19:54.491 "dhgroup": "ffdhe8192" 00:19:54.491 } 00:19:54.491 } 00:19:54.491 ]' 00:19:54.491 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.491 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.491 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.491 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.491 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.491 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.491 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.491 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.753 21:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGJkNjA1YmZkOTgwNGJhZDYwYWVkYzQwN2I2OTgxMGEyYzIyODQ2NzBjMmY4ZjhiCQgn+w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ5NWViMWMxZmVmZTRiYjY5YWY1MDM0ZDJiZTBjOGJhNzRiMjAwMTI2ZjlmZGQ2OWNmNWYxNmFkMzVkNTgwNvt1FDc=: 00:19:55.325 21:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.325 21:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.326 21:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:55.898 request: 00:19:55.898 { 00:19:55.898 "name": "nvme0", 00:19:55.898 "trtype": "tcp", 00:19:55.898 "traddr": "10.0.0.2", 00:19:55.898 "adrfam": "ipv4", 00:19:55.898 "trsvcid": "4420", 00:19:55.898 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:55.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:55.898 "prchk_reftag": false, 00:19:55.898 "prchk_guard": false, 00:19:55.898 "hdgst": false, 00:19:55.898 "ddgst": false, 00:19:55.898 "dhchap_key": "key2", 00:19:55.898 "method": "bdev_nvme_attach_controller", 00:19:55.898 "req_id": 1 00:19:55.898 } 00:19:55.898 Got JSON-RPC error response 00:19:55.898 response: 00:19:55.898 { 00:19:55.898 "code": -5, 00:19:55.898 "message": "Input/output error" 00:19:55.898 } 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:55.898 21:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:56.473 request: 00:19:56.473 { 00:19:56.473 "name": "nvme0", 00:19:56.473 "trtype": "tcp", 00:19:56.473 "traddr": "10.0.0.2", 00:19:56.473 "adrfam": "ipv4", 00:19:56.473 "trsvcid": "4420", 00:19:56.473 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:56.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:56.473 "prchk_reftag": false, 00:19:56.473 "prchk_guard": false, 00:19:56.473 "hdgst": false, 00:19:56.473 "ddgst": false, 00:19:56.473 "dhchap_key": "key1", 00:19:56.473 "dhchap_ctrlr_key": "ckey2", 00:19:56.473 "method": "bdev_nvme_attach_controller", 00:19:56.473 "req_id": 1 00:19:56.473 } 00:19:56.473 Got JSON-RPC error response 00:19:56.473 response: 00:19:56.473 { 00:19:56.473 "code": -5, 00:19:56.473 "message": "Input/output error" 00:19:56.473 } 00:19:56.473 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:56.473 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:56.473 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:56.473 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:56.473 21:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:56.473 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.473 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.473 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.474 21:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.043 request: 00:19:57.043 { 00:19:57.043 "name": "nvme0", 00:19:57.043 "trtype": "tcp", 00:19:57.043 "traddr": "10.0.0.2", 00:19:57.043 "adrfam": "ipv4", 00:19:57.043 "trsvcid": "4420", 00:19:57.043 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:57.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:57.043 "prchk_reftag": false, 00:19:57.043 "prchk_guard": false, 00:19:57.043 "hdgst": false, 00:19:57.043 "ddgst": false, 00:19:57.043 "dhchap_key": "key1", 00:19:57.043 "dhchap_ctrlr_key": "ckey1", 00:19:57.043 "method": "bdev_nvme_attach_controller", 00:19:57.043 "req_id": 1 00:19:57.043 } 00:19:57.043 Got JSON-RPC error response 00:19:57.043 response: 00:19:57.043 { 00:19:57.043 "code": -5, 00:19:57.043 "message": "Input/output error" 00:19:57.043 } 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1958695 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1958695 ']' 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1958695 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1958695 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1958695' 00:19:57.043 killing process with pid 1958695 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1958695 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1958695 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1984634 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1984634 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1984634 ']' 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.043 21:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1984634 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1984634 ']' 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
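The attach/detach cycles traced above all go through the hostrpc helper, which the trace shows forwarding to SPDK's rpc.py against the host application's /var/tmp/host.sock socket. A minimal stand-alone sketch of one successful round trip follows; the script path, socket, address and NQNs are the ones from this run, and key0/ckey0 are assumed to name DH-CHAP keys already loaded into the host application earlier in the suite.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# attach with a host key and, for bidirectional auth, a controller key
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# confirm the controller came up, then tear it down again
$RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

The NOT-wrapped variants above issue the same call with mismatched keys and expect it to fail, which is why those JSON-RPC responses report code -5 (Input/output error) instead of a controller name.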
00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.982 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.242 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.811 00:19:58.811 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.811 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.811 21:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.811 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.811 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.811 21:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.811 21:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.071 { 00:19:59.071 
"cntlid": 1, 00:19:59.071 "qid": 0, 00:19:59.071 "state": "enabled", 00:19:59.071 "thread": "nvmf_tgt_poll_group_000", 00:19:59.071 "listen_address": { 00:19:59.071 "trtype": "TCP", 00:19:59.071 "adrfam": "IPv4", 00:19:59.071 "traddr": "10.0.0.2", 00:19:59.071 "trsvcid": "4420" 00:19:59.071 }, 00:19:59.071 "peer_address": { 00:19:59.071 "trtype": "TCP", 00:19:59.071 "adrfam": "IPv4", 00:19:59.071 "traddr": "10.0.0.1", 00:19:59.071 "trsvcid": "56564" 00:19:59.071 }, 00:19:59.071 "auth": { 00:19:59.071 "state": "completed", 00:19:59.071 "digest": "sha512", 00:19:59.071 "dhgroup": "ffdhe8192" 00:19:59.071 } 00:19:59.071 } 00:19:59.071 ]' 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.071 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.330 21:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDYxNzVkMzRlNWU5MWRkNWRlNGNiZTBmYjlkYTk3ZWI0Yzc4ZTQ1OTVmNzkxZTQwODU3YWJhMDBiOGI4NjE1NBaAYtM=: 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:59.900 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.161 request: 00:20:00.161 { 00:20:00.161 "name": "nvme0", 00:20:00.161 "trtype": "tcp", 00:20:00.161 "traddr": "10.0.0.2", 00:20:00.161 "adrfam": "ipv4", 00:20:00.161 "trsvcid": "4420", 00:20:00.161 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:00.161 "prchk_reftag": false, 00:20:00.161 "prchk_guard": false, 00:20:00.161 "hdgst": false, 00:20:00.161 "ddgst": false, 00:20:00.161 "dhchap_key": "key3", 00:20:00.161 "method": "bdev_nvme_attach_controller", 00:20:00.161 "req_id": 1 00:20:00.161 } 00:20:00.161 Got JSON-RPC error response 00:20:00.161 response: 00:20:00.161 { 00:20:00.161 "code": -5, 00:20:00.161 "message": "Input/output error" 00:20:00.161 } 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:00.161 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.423 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.684 request: 00:20:00.684 { 00:20:00.684 "name": "nvme0", 00:20:00.684 "trtype": "tcp", 00:20:00.684 "traddr": "10.0.0.2", 00:20:00.684 "adrfam": "ipv4", 00:20:00.684 "trsvcid": "4420", 00:20:00.684 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:00.684 "prchk_reftag": false, 00:20:00.684 "prchk_guard": false, 00:20:00.684 "hdgst": false, 00:20:00.684 "ddgst": false, 00:20:00.684 "dhchap_key": "key3", 00:20:00.684 "method": "bdev_nvme_attach_controller", 00:20:00.684 "req_id": 1 00:20:00.684 } 00:20:00.684 Got JSON-RPC error response 00:20:00.684 response: 00:20:00.684 { 00:20:00.684 "code": -5, 00:20:00.684 "message": "Input/output error" 00:20:00.684 } 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:00.684 21:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:00.945 request: 00:20:00.945 { 00:20:00.945 "name": "nvme0", 00:20:00.945 "trtype": "tcp", 00:20:00.945 "traddr": "10.0.0.2", 00:20:00.945 "adrfam": "ipv4", 00:20:00.945 "trsvcid": "4420", 00:20:00.945 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:00.945 "prchk_reftag": false, 00:20:00.945 "prchk_guard": false, 00:20:00.945 "hdgst": false, 00:20:00.945 "ddgst": false, 00:20:00.945 
"dhchap_key": "key0", 00:20:00.945 "dhchap_ctrlr_key": "key1", 00:20:00.945 "method": "bdev_nvme_attach_controller", 00:20:00.945 "req_id": 1 00:20:00.945 } 00:20:00.945 Got JSON-RPC error response 00:20:00.945 response: 00:20:00.945 { 00:20:00.945 "code": -5, 00:20:00.945 "message": "Input/output error" 00:20:00.945 } 00:20:00.945 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:00.945 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.945 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.945 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.945 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:00.945 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:01.206 00:20:01.206 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:01.206 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:01.206 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1958727 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1958727 ']' 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1958727 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.466 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1958727 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1958727' 00:20:01.726 killing process with pid 1958727 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1958727 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1958727 
00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:01.726 rmmod nvme_tcp 00:20:01.726 rmmod nvme_fabrics 00:20:01.726 rmmod nvme_keyring 00:20:01.726 21:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1984634 ']' 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1984634 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1984634 ']' 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1984634 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.726 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1984634 00:20:01.985 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:01.985 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:01.985 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1984634' 00:20:01.985 killing process with pid 1984634 00:20:01.985 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1984634 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1984634 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.986 21:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.552 21:10:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.552 21:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.owO /tmp/spdk.key-sha256.M7B /tmp/spdk.key-sha384.ROH /tmp/spdk.key-sha512.K3c /tmp/spdk.key-sha512.ycf /tmp/spdk.key-sha384.6rD /tmp/spdk.key-sha256.bgQ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:04.552 00:20:04.552 real 2m20.649s 00:20:04.552 user 5m11.791s 00:20:04.552 sys 0m19.357s 00:20:04.552 21:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.552 21:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.552 ************************************ 00:20:04.552 END TEST nvmf_auth_target 00:20:04.552 ************************************ 00:20:04.552 21:10:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:04.552 21:10:31 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:04.552 21:10:31 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:04.552 21:10:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:04.552 21:10:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.552 21:10:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.552 ************************************ 00:20:04.552 START TEST nvmf_bdevio_no_huge 00:20:04.552 ************************************ 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:04.552 * Looking for test storage... 00:20:04.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.552 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
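As the nvmf/common.sh prologue above shows, this suite generates a host NQN with nvme gen-hostnqn and reuses its UUID as the host ID for every later connect. A condensed sketch of that step is below; the trace only records the resulting values, so the exact extraction of the UUID is an assumption.

# derive the host NQN/ID pair passed via --hostnqn/--hostid on nvme connect
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the trailing UUID (assumed extraction)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")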
00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.553 21:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:12.695 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.695 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:12.696 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:12.696 Found net devices under 0000:31:00.0: cvl_0_0 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:12.696 Found net devices under 0000:31:00.1: cvl_0_1 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:20:12.696 00:20:12.696 --- 10.0.0.2 ping statistics --- 00:20:12.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.696 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:20:12.696 00:20:12.696 --- 10.0.0.1 ping statistics --- 00:20:12.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.696 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1990373 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1990373 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1990373 ']' 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.696 21:10:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.696 [2024-07-15 21:10:39.733911] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:20:12.696 [2024-07-15 21:10:39.733981] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:12.696 [2024-07-15 21:10:39.834199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:12.696 [2024-07-15 21:10:39.940769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
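# [Editor's sketch - not part of the captured log] The nvmf_tcp_init steps traced above
# turn the two cvl interfaces (which appear to be cabled back to back) into a minimal
# two-host topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed
# as the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator
# (10.0.0.1), port 4420 is opened, and both directions are ping-checked. Condensed from
# the commands in the trace; the interface and namespace names are the ones this test
# derived, not defaults.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator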
00:20:12.696 [2024-07-15 21:10:39.940825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.696 [2024-07-15 21:10:39.940833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.696 [2024-07-15 21:10:39.940840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.696 [2024-07-15 21:10:39.940846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.696 [2024-07-15 21:10:39.941008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:12.696 [2024-07-15 21:10:39.941166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:12.696 [2024-07-15 21:10:39.941314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.696 [2024-07-15 21:10:39.941314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:13.267 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.267 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:13.267 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.267 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.267 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 [2024-07-15 21:10:40.592287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 Malloc0 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.528 21:10:40 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 [2024-07-15 21:10:40.630015] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.528 { 00:20:13.528 "params": { 00:20:13.528 "name": "Nvme$subsystem", 00:20:13.528 "trtype": "$TEST_TRANSPORT", 00:20:13.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.528 "adrfam": "ipv4", 00:20:13.528 "trsvcid": "$NVMF_PORT", 00:20:13.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.528 "hdgst": ${hdgst:-false}, 00:20:13.528 "ddgst": ${ddgst:-false} 00:20:13.528 }, 00:20:13.528 "method": "bdev_nvme_attach_controller" 00:20:13.528 } 00:20:13.528 EOF 00:20:13.528 )") 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:13.528 21:10:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:13.528 "params": { 00:20:13.528 "name": "Nvme1", 00:20:13.528 "trtype": "tcp", 00:20:13.528 "traddr": "10.0.0.2", 00:20:13.528 "adrfam": "ipv4", 00:20:13.528 "trsvcid": "4420", 00:20:13.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.528 "hdgst": false, 00:20:13.528 "ddgst": false 00:20:13.528 }, 00:20:13.528 "method": "bdev_nvme_attach_controller" 00:20:13.528 }' 00:20:13.528 [2024-07-15 21:10:40.685287] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
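# [Editor's sketch - not part of the captured log] Before launching bdevio, the test
# assembles the target it will exercise with a handful of RPCs against the nvmf_tgt
# started earlier inside the namespace (rpc_cmd in the trace forwards to rpc.py over the
# default /var/tmp/spdk.sock). Spelled out, the sequence is:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then connects as nqn.2016-06.io.spdk:host1 using the JSON config printed above.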
00:20:13.528 [2024-07-15 21:10:40.685363] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1990603 ] 00:20:13.528 [2024-07-15 21:10:40.763135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:13.788 [2024-07-15 21:10:40.859273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.788 [2024-07-15 21:10:40.859332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.788 [2024-07-15 21:10:40.859336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.788 I/O targets: 00:20:13.788 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:13.788 00:20:13.788 00:20:13.788 CUnit - A unit testing framework for C - Version 2.1-3 00:20:13.788 http://cunit.sourceforge.net/ 00:20:13.788 00:20:13.788 00:20:13.788 Suite: bdevio tests on: Nvme1n1 00:20:13.788 Test: blockdev write read block ...passed 00:20:14.048 Test: blockdev write zeroes read block ...passed 00:20:14.049 Test: blockdev write zeroes read no split ...passed 00:20:14.049 Test: blockdev write zeroes read split ...passed 00:20:14.049 Test: blockdev write zeroes read split partial ...passed 00:20:14.049 Test: blockdev reset ...[2024-07-15 21:10:41.137480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:14.049 [2024-07-15 21:10:41.137545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1431b10 (9): Bad file descriptor 00:20:14.049 [2024-07-15 21:10:41.154773] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:14.049 passed 00:20:14.049 Test: blockdev write read 8 blocks ...passed 00:20:14.049 Test: blockdev write read size > 128k ...passed 00:20:14.049 Test: blockdev write read invalid size ...passed 00:20:14.049 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:14.049 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:14.049 Test: blockdev write read max offset ...passed 00:20:14.049 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:14.049 Test: blockdev writev readv 8 blocks ...passed 00:20:14.049 Test: blockdev writev readv 30 x 1block ...passed 00:20:14.309 Test: blockdev writev readv block ...passed 00:20:14.309 Test: blockdev writev readv size > 128k ...passed 00:20:14.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:14.309 Test: blockdev comparev and writev ...[2024-07-15 21:10:41.382118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.309 [2024-07-15 21:10:41.382143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.382154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.309 [2024-07-15 21:10:41.382160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.382673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.309 [2024-07-15 21:10:41.382682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.382692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.309 [2024-07-15 21:10:41.382697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.383155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.309 [2024-07-15 21:10:41.383163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.383173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.309 [2024-07-15 21:10:41.383178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.383643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.309 [2024-07-15 21:10:41.383652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.383662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.309 [2024-07-15 21:10:41.383668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:14.309 passed 00:20:14.309 Test: blockdev nvme passthru rw ...passed 00:20:14.309 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:10:41.469094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.309 [2024-07-15 21:10:41.469105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.469475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.309 [2024-07-15 21:10:41.469483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.469846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.309 [2024-07-15 21:10:41.469854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:14.309 [2024-07-15 21:10:41.470218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.309 [2024-07-15 21:10:41.470226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:14.309 passed 00:20:14.309 Test: blockdev nvme admin passthru ...passed 00:20:14.309 Test: blockdev copy ...passed 00:20:14.309 00:20:14.309 Run Summary: Type Total Ran Passed Failed Inactive 00:20:14.309 suites 1 1 n/a 0 0 00:20:14.309 tests 23 23 23 0 0 00:20:14.309 asserts 152 152 152 0 n/a 00:20:14.309 00:20:14.309 Elapsed time = 1.060 seconds 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:14.570 rmmod nvme_tcp 00:20:14.570 rmmod nvme_fabrics 00:20:14.570 rmmod nvme_keyring 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1990373 ']' 00:20:14.570 21:10:41 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1990373 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1990373 ']' 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1990373 00:20:14.570 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:14.831 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.831 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1990373 00:20:14.831 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:14.831 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:14.831 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1990373' 00:20:14.831 killing process with pid 1990373 00:20:14.831 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1990373 00:20:14.831 21:10:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1990373 00:20:15.091 21:10:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.091 21:10:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.091 21:10:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.091 21:10:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.091 21:10:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.091 21:10:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.091 21:10:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.091 21:10:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.631 21:10:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:17.631 00:20:17.631 real 0m12.995s 00:20:17.631 user 0m13.124s 00:20:17.631 sys 0m7.060s 00:20:17.631 21:10:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.631 21:10:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.631 ************************************ 00:20:17.631 END TEST nvmf_bdevio_no_huge 00:20:17.631 ************************************ 00:20:17.631 21:10:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:17.631 21:10:44 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:17.631 21:10:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:17.631 21:10:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.631 21:10:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.631 ************************************ 00:20:17.631 START TEST nvmf_tls 00:20:17.631 ************************************ 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:17.631 * Looking for test storage... 
00:20:17.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:17.631 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:17.632 21:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.632 21:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:25.773 
21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:25.773 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:25.773 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.773 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:25.774 Found net devices under 0000:31:00.0: cvl_0_0 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:25.774 Found net devices under 0000:31:00.1: cvl_0_1 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:25.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:20:25.774 00:20:25.774 --- 10.0.0.2 ping statistics --- 00:20:25.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.774 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:20:25.774 00:20:25.774 --- 10.0.0.1 ping statistics --- 00:20:25.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.774 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1995457 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1995457 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1995457 ']' 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.774 21:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.774 [2024-07-15 21:10:52.782027] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:20:25.774 [2024-07-15 21:10:52.782114] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.774 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.774 [2024-07-15 21:10:52.880365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.774 [2024-07-15 21:10:52.971951] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.774 [2024-07-15 21:10:52.972013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
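# [Editor's sketch - not part of the captured log] For the TLS suite the target is
# started with --wait-for-rpc so the socket layer can be reconfigured before any
# subsystem or listener exists: tls.sh makes ssl the default sock implementation, pins
# TLS 1.3 (after poking the version and ktls options to verify they stick), and only
# then completes framework initialization - the sequence traced in the following lines,
# condensed here:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
$rpc framework_start_init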
00:20:25.774 [2024-07-15 21:10:52.972021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.774 [2024-07-15 21:10:52.972027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.774 [2024-07-15 21:10:52.972033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.774 [2024-07-15 21:10:52.972058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.359 21:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.359 21:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:26.359 21:10:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.359 21:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.359 21:10:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 21:10:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.359 21:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:26.359 21:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:26.701 true 00:20:26.701 21:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.701 21:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:26.701 21:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:26.701 21:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:26.701 21:10:53 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:26.964 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.964 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:27.225 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:27.225 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:27.225 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:27.225 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:27.225 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:27.486 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:27.486 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:27.486 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:27.486 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:27.747 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:27.747 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:27.747 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:27.747 21:10:54 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:27.747 21:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:28.008 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:28.008 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:28.008 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:28.269 21:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Vd1PaarX6F 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.K2Q2DfBJ4V 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Vd1PaarX6F 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.K2Q2DfBJ4V 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:28.530 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:28.790 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Vd1PaarX6F 00:20:28.790 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Vd1PaarX6F 00:20:28.790 21:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.050 [2024-07-15 21:10:56.088295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.050 21:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:29.050 21:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:29.310 [2024-07-15 21:10:56.384998] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.310 [2024-07-15 21:10:56.385169] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.310 21:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:29.310 malloc0 00:20:29.310 21:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.570 21:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Vd1PaarX6F 00:20:29.570 [2024-07-15 21:10:56.819971] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:29.570 21:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Vd1PaarX6F 00:20:29.570 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.793 Initializing NVMe Controllers 00:20:41.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:41.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:41.793 Initialization complete. Launching workers. 
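# [Editor's sketch - not part of the captured log] format_interchange_psk above wraps a
# raw hex PSK into the NVMe TLS PSK interchange form "NVMeTLSkey-1:01:<base64>:". As far
# as I can tell from the helper's output, the base64 payload is the configured key with
# a CRC-32 of the key appended little-endian, and the "01" field reflects the digest
# argument; treat that framing as an assumption - the key strings in the trace are
# authoritative. The file is then handed to both ends of the connection.
key=00112233445566778899aabbccddeeff
psk=$(python3 -c '
import base64, struct, sys, zlib
k = sys.argv[1].encode()
print("NVMeTLSkey-1:01:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")
' "$key")
key_path=$(mktemp)
echo -n "$psk" > "$key_path"
chmod 0600 "$key_path"     # keep the key file owner-only, as the test does
# target side trusts host1 with this PSK; the initiator presents the same file:
#   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
#   spdk_nvme_perf -S ssl ... --psk-path "$key_path"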
00:20:41.793 ======================================================== 00:20:41.793 Latency(us) 00:20:41.793 Device Information : IOPS MiB/s Average min max 00:20:41.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18814.16 73.49 3401.82 1031.42 5795.14 00:20:41.793 ======================================================== 00:20:41.793 Total : 18814.16 73.49 3401.82 1031.42 5795.14 00:20:41.793 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vd1PaarX6F 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Vd1PaarX6F' 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1998367 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1998367 /var/tmp/bdevperf.sock 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1998367 ']' 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.793 21:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.793 [2024-07-15 21:11:06.962630] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
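# [Editor's sketch - not part of the captured log] run_bdevperf drives the initiator
# side over bdevperf's own RPC socket: bdevperf starts idle (-z) on
# /var/tmp/bdevperf.sock, an NVMe-oF bdev is attached over TLS with the shared PSK, and
# bdevperf.py then runs the queued verify workload. Condensed from the trace
# (waitforlisten and cleanup handling omitted):
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Vd1PaarX6F
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests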
00:20:41.793 [2024-07-15 21:11:06.962686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998367 ] 00:20:41.793 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.793 [2024-07-15 21:11:07.018828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.793 [2024-07-15 21:11:07.071996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.793 21:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.793 21:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.793 21:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Vd1PaarX6F 00:20:41.793 [2024-07-15 21:11:07.869102] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.793 [2024-07-15 21:11:07.869173] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:41.793 TLSTESTn1 00:20:41.793 21:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:41.793 Running I/O for 10 seconds... 00:20:51.786 00:20:51.786 Latency(us) 00:20:51.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.786 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:51.786 Verification LBA range: start 0x0 length 0x2000 00:20:51.786 TLSTESTn1 : 10.01 6332.06 24.73 0.00 0.00 20183.77 4532.91 46530.56 00:20:51.786 =================================================================================================================== 00:20:51.786 Total : 6332.06 24.73 0.00 0.00 20183.77 4532.91 46530.56 00:20:51.786 0 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1998367 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1998367 ']' 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1998367 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1998367 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1998367' 00:20:51.786 killing process with pid 1998367 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1998367 00:20:51.786 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.786 00:20:51.786 Latency(us) 00:20:51.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:51.786 =================================================================================================================== 00:20:51.786 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.786 [2024-07-15 21:11:18.164564] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1998367 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K2Q2DfBJ4V 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K2Q2DfBJ4V 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K2Q2DfBJ4V 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.K2Q2DfBJ4V' 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2000489 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2000489 /var/tmp/bdevperf.sock 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2000489 ']' 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.786 21:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.786 [2024-07-15 21:11:18.340790] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:20:51.786 [2024-07-15 21:11:18.340847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000489 ] 00:20:51.786 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.786 [2024-07-15 21:11:18.396990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.786 [2024-07-15 21:11:18.447114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.046 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.046 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:52.047 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.K2Q2DfBJ4V 00:20:52.047 [2024-07-15 21:11:19.244377] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.047 [2024-07-15 21:11:19.244446] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:52.047 [2024-07-15 21:11:19.252073] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:52.047 [2024-07-15 21:11:19.252403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898eb0 (107): Transport endpoint is not connected 00:20:52.047 [2024-07-15 21:11:19.253397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898eb0 (9): Bad file descriptor 00:20:52.047 [2024-07-15 21:11:19.254399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:52.047 [2024-07-15 21:11:19.254407] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:52.047 [2024-07-15 21:11:19.254415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:52.047 request: 00:20:52.047 { 00:20:52.047 "name": "TLSTEST", 00:20:52.047 "trtype": "tcp", 00:20:52.047 "traddr": "10.0.0.2", 00:20:52.047 "adrfam": "ipv4", 00:20:52.047 "trsvcid": "4420", 00:20:52.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.047 "prchk_reftag": false, 00:20:52.047 "prchk_guard": false, 00:20:52.047 "hdgst": false, 00:20:52.047 "ddgst": false, 00:20:52.047 "psk": "/tmp/tmp.K2Q2DfBJ4V", 00:20:52.047 "method": "bdev_nvme_attach_controller", 00:20:52.047 "req_id": 1 00:20:52.047 } 00:20:52.047 Got JSON-RPC error response 00:20:52.047 response: 00:20:52.047 { 00:20:52.047 "code": -5, 00:20:52.047 "message": "Input/output error" 00:20:52.047 } 00:20:52.047 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2000489 00:20:52.047 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2000489 ']' 00:20:52.047 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2000489 00:20:52.047 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:52.047 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.047 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2000489 00:20:52.308 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:52.308 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:52.308 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2000489' 00:20:52.308 killing process with pid 2000489 00:20:52.308 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2000489 00:20:52.308 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.308 00:20:52.308 Latency(us) 00:20:52.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.308 =================================================================================================================== 00:20:52.308 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:52.308 [2024-07-15 21:11:19.346307] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:52.308 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2000489 00:20:52.308 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:52.308 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Vd1PaarX6F 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Vd1PaarX6F 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Vd1PaarX6F 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Vd1PaarX6F' 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2000828 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2000828 /var/tmp/bdevperf.sock 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2000828 ']' 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:52.309 21:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.309 [2024-07-15 21:11:19.504087] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
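Annotation: the attach and perform_tests steps traced in these runs are ordinary JSON-RPC 2.0 calls over the bdevperf Unix-domain socket, and the request/response objects dumped after each failed attach show the exact parameter set used. The sketch below issues one such bdev_nvme_attach_controller call directly, purely as an illustration of that traffic; the socket path, NQNs, address, and key file are copied from the log, while the hand-rolled client itself is an assumption standing in for scripts/rpc.py and is not part of tls.sh.

#!/usr/bin/env python3
# Minimal sketch of the JSON-RPC traffic behind:
#   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
#       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
#       -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Vd1PaarX6F
# Illustration only; scripts/rpc.py is the supported client.
import json
import socket

def rpc_call(sock_path, method, params, req_id=1):
    """Send one JSON-RPC 2.0 request over SPDK's Unix-domain socket and return
    the decoded response (either a "result" or an "error" object)."""
    request = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    decoder = json.JSONDecoder()
    buf = b""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response arrived")
            buf += chunk
            try:
                response, _ = decoder.raw_decode(buf.decode())
                return response
            except ValueError:
                continue  # keep reading until the JSON object is complete

if __name__ == "__main__":
    resp = rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.Vd1PaarX6F",
    })
    # On failure this prints an error object like the {"code": -5, "message":
    # "Input/output error"} responses dumped above; on success, the created bdev names.
    print(resp)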
00:20:52.309 [2024-07-15 21:11:19.504139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000828 ] 00:20:52.309 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.309 [2024-07-15 21:11:19.560428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.569 [2024-07-15 21:11:19.612233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.138 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.138 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:53.138 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Vd1PaarX6F 00:20:53.138 [2024-07-15 21:11:20.421529] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.138 [2024-07-15 21:11:20.421598] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:53.138 [2024-07-15 21:11:20.425880] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:53.138 [2024-07-15 21:11:20.425901] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:53.138 [2024-07-15 21:11:20.425923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:53.138 [2024-07-15 21:11:20.426565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9eaeb0 (107): Transport endpoint is not connected 00:20:53.138 [2024-07-15 21:11:20.427558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9eaeb0 (9): Bad file descriptor 00:20:53.399 [2024-07-15 21:11:20.428559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:53.399 [2024-07-15 21:11:20.428569] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:53.399 [2024-07-15 21:11:20.428576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:53.399 request: 00:20:53.399 { 00:20:53.399 "name": "TLSTEST", 00:20:53.399 "trtype": "tcp", 00:20:53.399 "traddr": "10.0.0.2", 00:20:53.399 "adrfam": "ipv4", 00:20:53.399 "trsvcid": "4420", 00:20:53.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.399 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.399 "prchk_reftag": false, 00:20:53.399 "prchk_guard": false, 00:20:53.399 "hdgst": false, 00:20:53.399 "ddgst": false, 00:20:53.399 "psk": "/tmp/tmp.Vd1PaarX6F", 00:20:53.399 "method": "bdev_nvme_attach_controller", 00:20:53.399 "req_id": 1 00:20:53.399 } 00:20:53.399 Got JSON-RPC error response 00:20:53.399 response: 00:20:53.399 { 00:20:53.399 "code": -5, 00:20:53.399 "message": "Input/output error" 00:20:53.399 } 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2000828 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2000828 ']' 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2000828 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2000828 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2000828' 00:20:53.399 killing process with pid 2000828 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2000828 00:20:53.399 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.399 00:20:53.399 Latency(us) 00:20:53.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.399 =================================================================================================================== 00:20:53.399 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.399 [2024-07-15 21:11:20.514544] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2000828 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vd1PaarX6F 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vd1PaarX6F 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vd1PaarX6F 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Vd1PaarX6F' 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2001016 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2001016 /var/tmp/bdevperf.sock 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001016 ']' 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.399 21:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.399 [2024-07-15 21:11:20.672954] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:20:53.399 [2024-07-15 21:11:20.673007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001016 ] 00:20:53.662 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.662 [2024-07-15 21:11:20.731321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.662 [2024-07-15 21:11:20.785531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.232 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.232 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.232 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Vd1PaarX6F 00:20:54.493 [2024-07-15 21:11:21.590481] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.493 [2024-07-15 21:11:21.590547] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:54.493 [2024-07-15 21:11:21.597715] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:54.493 [2024-07-15 21:11:21.597733] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:54.493 [2024-07-15 21:11:21.597753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:54.493 [2024-07-15 21:11:21.598748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1eb0 (107): Transport endpoint is not connected 00:20:54.493 [2024-07-15 21:11:21.599741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1eb0 (9): Bad file descriptor 00:20:54.493 [2024-07-15 21:11:21.600743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:54.493 [2024-07-15 21:11:21.600751] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:54.493 [2024-07-15 21:11:21.600760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:54.493 request: 00:20:54.493 { 00:20:54.493 "name": "TLSTEST", 00:20:54.493 "trtype": "tcp", 00:20:54.493 "traddr": "10.0.0.2", 00:20:54.493 "adrfam": "ipv4", 00:20:54.493 "trsvcid": "4420", 00:20:54.493 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.493 "prchk_reftag": false, 00:20:54.493 "prchk_guard": false, 00:20:54.493 "hdgst": false, 00:20:54.493 "ddgst": false, 00:20:54.493 "psk": "/tmp/tmp.Vd1PaarX6F", 00:20:54.493 "method": "bdev_nvme_attach_controller", 00:20:54.493 "req_id": 1 00:20:54.493 } 00:20:54.493 Got JSON-RPC error response 00:20:54.493 response: 00:20:54.493 { 00:20:54.493 "code": -5, 00:20:54.493 "message": "Input/output error" 00:20:54.493 } 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2001016 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001016 ']' 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2001016 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001016 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001016' 00:20:54.493 killing process with pid 2001016 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001016 00:20:54.493 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.493 00:20:54.493 Latency(us) 00:20:54.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.493 =================================================================================================================== 00:20:54.493 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:54.493 [2024-07-15 21:11:21.686148] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:54.493 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001016 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2001197 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2001197 /var/tmp/bdevperf.sock 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001197 ']' 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.754 21:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.754 [2024-07-15 21:11:21.842694] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:20:54.754 [2024-07-15 21:11:21.842747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001197 ] 00:20:54.754 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.754 [2024-07-15 21:11:21.898933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.754 [2024-07-15 21:11:21.949169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:55.695 [2024-07-15 21:11:22.770608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:55.695 [2024-07-15 21:11:22.772001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7605b0 (9): Bad file descriptor 00:20:55.695 [2024-07-15 21:11:22.773000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:55.695 [2024-07-15 21:11:22.773009] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:55.695 [2024-07-15 21:11:22.773016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:55.695 request: 00:20:55.695 { 00:20:55.695 "name": "TLSTEST", 00:20:55.695 "trtype": "tcp", 00:20:55.695 "traddr": "10.0.0.2", 00:20:55.695 "adrfam": "ipv4", 00:20:55.695 "trsvcid": "4420", 00:20:55.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.695 "prchk_reftag": false, 00:20:55.695 "prchk_guard": false, 00:20:55.695 "hdgst": false, 00:20:55.695 "ddgst": false, 00:20:55.695 "method": "bdev_nvme_attach_controller", 00:20:55.695 "req_id": 1 00:20:55.695 } 00:20:55.695 Got JSON-RPC error response 00:20:55.695 response: 00:20:55.695 { 00:20:55.695 "code": -5, 00:20:55.695 "message": "Input/output error" 00:20:55.695 } 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2001197 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001197 ']' 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2001197 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001197 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001197' 00:20:55.695 killing process with pid 2001197 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001197 00:20:55.695 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.695 00:20:55.695 Latency(us) 00:20:55.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.695 =================================================================================================================== 00:20:55.695 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001197 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1995457 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1995457 ']' 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1995457 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.695 21:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1995457 00:20:55.955 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:55.955 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:55.955 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1995457' 00:20:55.955 
killing process with pid 1995457 00:20:55.955 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1995457 00:20:55.955 [2024-07-15 21:11:23.016422] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:55.955 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1995457 00:20:55.955 21:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.0YxTVOOO5N 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.0YxTVOOO5N 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2001546 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2001546 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001546 ']' 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.956 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.956 [2024-07-15 21:11:23.234408] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
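Annotation: the key_long value generated just above by format_interchange_psk wraps the 48-character configured key in the TLS PSK interchange format NVMeTLSkey-1:<two-digit hash id>:<base64>:, where the base64 payload is the key with its CRC-32 appended. A standalone sketch of that transformation follows; the little-endian CRC byte order and the two-digit hash field formatting are assumptions checked against the value captured in the log, not a verbatim copy of the format_key helper.

#!/usr/bin/env python3
# Sketch of the PSK interchange formatting done by format_interchange_psk/format_key.
# Assumption: the CRC-32 of the configured key is appended little-endian before
# base64 encoding, and the digest argument (2 here) becomes the two-digit field.
import base64
import zlib

def format_interchange_psk(key: bytes, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed byte order
    payload = base64.b64encode(key + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, digest, payload)

key = b"00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(key, 2))
# Expected to reproduce the key_long captured above:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: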
00:20:55.956 [2024-07-15 21:11:23.234465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.215 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.215 [2024-07-15 21:11:23.322807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.215 [2024-07-15 21:11:23.377583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.215 [2024-07-15 21:11:23.377614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.215 [2024-07-15 21:11:23.377620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.215 [2024-07-15 21:11:23.377624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.215 [2024-07-15 21:11:23.377628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.215 [2024-07-15 21:11:23.377642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.785 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.785 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:56.785 21:11:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.785 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.785 21:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.785 21:11:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.785 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.0YxTVOOO5N 00:20:56.785 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0YxTVOOO5N 00:20:56.785 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:57.045 [2024-07-15 21:11:24.175631] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.045 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:57.304 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:57.304 [2024-07-15 21:11:24.468342] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.304 [2024-07-15 21:11:24.468519] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.304 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:57.563 malloc0 00:20:57.563 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.563 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.0YxTVOOO5N 00:20:57.824 [2024-07-15 21:11:24.919436] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0YxTVOOO5N 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0YxTVOOO5N' 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2001907 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2001907 /var/tmp/bdevperf.sock 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001907 ']' 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.824 21:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.824 [2024-07-15 21:11:24.965728] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
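Annotation: the setup_nvmf_tgt sequence traced above (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_listener -k, bdev_malloc_create, nvmf_subsystem_add_ns, nvmf_subsystem_add_host --psk) corresponds to the JSON-RPC calls outlined below against the target socket /var/tmp/spdk.sock. Only the nvmf_subsystem_add_host parameters are confirmed by a dump later in this log; the other parameter names, the "secure_channel" mapping for -k, and the 8192-block sizing for the 32 MiB malloc are assumptions, so treat this as an outline rather than the test's own code.

#!/usr/bin/env python3
# Outline of the target-side TLS setup driven above via scripts/rpc.py, expressed
# as raw JSON-RPC calls to the nvmf_tgt socket. Parameter names other than those
# of nvmf_subsystem_add_host (visible in a later dump) are assumptions.
import json
import socket

SOCK = "/var/tmp/spdk.sock"
SUBNQN = "nqn.2016-06.io.spdk:cnode1"

def rpc(method, params):
    """Compact JSON-RPC helper; same idea as the rpc_call sketch further up."""
    decoder, buf = json.JSONDecoder(), b""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                              "method": method, "params": params}).encode())
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("target closed the RPC socket")
            buf += chunk
            try:
                return decoder.raw_decode(buf.decode())[0]
            except ValueError:  # response not complete yet
                continue

rpc("nvmf_create_transport", {"trtype": "TCP"})
rpc("nvmf_create_subsystem", {"nqn": SUBNQN,
                              "serial_number": "SPDK00000000000001",
                              "max_namespaces": 10})
rpc("nvmf_subsystem_add_listener", {"nqn": SUBNQN,
                                    "secure_channel": True,  # assumed JSON name for -k
                                    "listen_address": {"trtype": "TCP", "adrfam": "ipv4",
                                                       "traddr": "10.0.0.2",
                                                       "trsvcid": "4420"}})
rpc("bdev_malloc_create", {"name": "malloc0",
                           "num_blocks": 8192,  # assumed: 32 MiB / 4096-byte blocks
                           "block_size": 4096})
rpc("nvmf_subsystem_add_ns", {"nqn": SUBNQN, "namespace": {"bdev_name": "malloc0"}})
rpc("nvmf_subsystem_add_host", {"nqn": SUBNQN, "host": "nqn.2016-06.io.spdk:host1",
                                "psk": "/tmp/tmp.0YxTVOOO5N"})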
00:20:57.824 [2024-07-15 21:11:24.965777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001907 ] 00:20:57.824 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.824 [2024-07-15 21:11:25.020814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.824 [2024-07-15 21:11:25.073111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.084 21:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.084 21:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.084 21:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0YxTVOOO5N 00:20:58.084 [2024-07-15 21:11:25.292632] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.084 [2024-07-15 21:11:25.292689] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.084 TLSTESTn1 00:20:58.343 21:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:58.343 Running I/O for 10 seconds... 00:21:08.336 00:21:08.336 Latency(us) 00:21:08.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.336 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:08.336 Verification LBA range: start 0x0 length 0x2000 00:21:08.336 TLSTESTn1 : 10.06 5283.98 20.64 0.00 0.00 24174.45 4532.91 104420.69 00:21:08.336 =================================================================================================================== 00:21:08.336 Total : 5283.98 20.64 0.00 0.00 24174.45 4532.91 104420.69 00:21:08.336 0 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2001907 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001907 ']' 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2001907 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001907 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001907' 00:21:08.336 killing process with pid 2001907 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001907 00:21:08.336 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.336 00:21:08.336 Latency(us) 00:21:08.336 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:08.336 =================================================================================================================== 00:21:08.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.336 [2024-07-15 21:11:35.625348] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:08.336 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001907 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.0YxTVOOO5N 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0YxTVOOO5N 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0YxTVOOO5N 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0YxTVOOO5N 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0YxTVOOO5N' 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.596 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2003915 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2003915 /var/tmp/bdevperf.sock 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2003915 ']' 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.597 21:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.597 [2024-07-15 21:11:35.805325] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:21:08.597 [2024-07-15 21:11:35.805397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003915 ] 00:21:08.597 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.597 [2024-07-15 21:11:35.862162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.857 [2024-07-15 21:11:35.913142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.428 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.428 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:09.428 21:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0YxTVOOO5N 00:21:09.428 [2024-07-15 21:11:36.710428] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.428 [2024-07-15 21:11:36.710469] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:09.429 [2024-07-15 21:11:36.710475] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.0YxTVOOO5N 00:21:09.429 request: 00:21:09.429 { 00:21:09.429 "name": "TLSTEST", 00:21:09.429 "trtype": "tcp", 00:21:09.429 "traddr": "10.0.0.2", 00:21:09.429 "adrfam": "ipv4", 00:21:09.429 "trsvcid": "4420", 00:21:09.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.429 "prchk_reftag": false, 00:21:09.429 "prchk_guard": false, 00:21:09.429 "hdgst": false, 00:21:09.429 "ddgst": false, 00:21:09.429 "psk": "/tmp/tmp.0YxTVOOO5N", 00:21:09.429 "method": "bdev_nvme_attach_controller", 00:21:09.429 "req_id": 1 00:21:09.429 } 00:21:09.429 Got JSON-RPC error response 00:21:09.429 response: 00:21:09.429 { 00:21:09.429 "code": -1, 00:21:09.429 "message": "Operation not permitted" 00:21:09.429 } 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2003915 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2003915 ']' 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2003915 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2003915 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2003915' 00:21:09.689 killing process with pid 2003915 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2003915 00:21:09.689 Received shutdown signal, test time was about 10.000000 seconds 00:21:09.689 00:21:09.689 Latency(us) 00:21:09.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.689 
=================================================================================================================== 00:21:09.689 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2003915 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2001546 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001546 ']' 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2001546 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:09.689 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.690 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001546 00:21:09.690 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:09.690 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:09.690 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001546' 00:21:09.690 killing process with pid 2001546 00:21:09.690 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001546 00:21:09.690 [2024-07-15 21:11:36.955279] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:09.690 21:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001546 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2004268 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2004268 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2004268 ']' 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.950 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.950 [2024-07-15 21:11:37.131017] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:09.950 [2024-07-15 21:11:37.131069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.950 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.950 [2024-07-15 21:11:37.222484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.236 [2024-07-15 21:11:37.275562] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.236 [2024-07-15 21:11:37.275596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.236 [2024-07-15 21:11:37.275604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.236 [2024-07-15 21:11:37.275609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.236 [2024-07-15 21:11:37.275613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.236 [2024-07-15 21:11:37.275635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.0YxTVOOO5N 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0YxTVOOO5N 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.0YxTVOOO5N 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0YxTVOOO5N 00:21:10.807 21:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:10.807 [2024-07-15 21:11:38.073953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.807 21:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:11.068 
21:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:11.327 [2024-07-15 21:11:38.378693] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.327 [2024-07-15 21:11:38.378867] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.327 21:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:11.327 malloc0 00:21:11.327 21:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0YxTVOOO5N 00:21:11.588 [2024-07-15 21:11:38.813763] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:11.588 [2024-07-15 21:11:38.813785] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:11.588 [2024-07-15 21:11:38.813805] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:11.588 request: 00:21:11.588 { 00:21:11.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.588 "host": "nqn.2016-06.io.spdk:host1", 00:21:11.588 "psk": "/tmp/tmp.0YxTVOOO5N", 00:21:11.588 "method": "nvmf_subsystem_add_host", 00:21:11.588 "req_id": 1 00:21:11.588 } 00:21:11.588 Got JSON-RPC error response 00:21:11.588 response: 00:21:11.588 { 00:21:11.588 "code": -32603, 00:21:11.588 "message": "Internal error" 00:21:11.588 } 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2004268 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2004268 ']' 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2004268 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.588 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2004268 00:21:11.848 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:11.848 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:11.848 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2004268' 00:21:11.848 killing process with pid 2004268 00:21:11.848 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2004268 00:21:11.848 21:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2004268 00:21:11.848 21:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.0YxTVOOO5N 00:21:11.848 21:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:11.848 
21:11:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.848 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:11.848 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.848 21:11:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2004630 00:21:11.848 21:11:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2004630 00:21:11.848 21:11:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:11.849 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2004630 ']' 00:21:11.849 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.849 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.849 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.849 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.849 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.849 [2024-07-15 21:11:39.066790] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:11.849 [2024-07-15 21:11:39.066850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.849 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.109 [2024-07-15 21:11:39.153163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.109 [2024-07-15 21:11:39.207026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.109 [2024-07-15 21:11:39.207059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.109 [2024-07-15 21:11:39.207064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.109 [2024-07-15 21:11:39.207069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.109 [2024-07-15 21:11:39.207072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
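The negative test above exercised the failure path: nvmf_subsystem_add_host rejected the key because the PSK file's permissions were too permissive ("Incorrect permissions for PSK file"), so the script only had to chmod it to 0600 before retrying. The setup_nvmf_tgt helper then repeats the same RPC sequence against the fresh target, as the log below shows. Condensed from those calls, with the workspace paths shortened to ./scripts/rpc.py:

    chmod 0600 /tmp/tmp.0YxTVOOO5N                     # TLS PSK files must not be readable by others
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables the (experimental) TLS listener
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0YxTVOOO5N   # PSK-path form, deprecated per the warnings in this log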
00:21:12.109 [2024-07-15 21:11:39.207087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.0YxTVOOO5N 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0YxTVOOO5N 00:21:12.680 21:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:12.940 [2024-07-15 21:11:40.009477] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.940 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:12.940 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:13.200 [2024-07-15 21:11:40.297986] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.200 [2024-07-15 21:11:40.298152] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.200 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:13.200 malloc0 00:21:13.200 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0YxTVOOO5N 00:21:13.460 [2024-07-15 21:11:40.732733] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2004991 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2004991 /var/tmp/bdevperf.sock 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2004991 ']' 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.460 21:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.722 [2024-07-15 21:11:40.768588] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:13.722 [2024-07-15 21:11:40.768628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004991 ] 00:21:13.722 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.722 [2024-07-15 21:11:40.816830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.722 [2024-07-15 21:11:40.868917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.722 21:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.722 21:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:13.722 21:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0YxTVOOO5N 00:21:13.982 [2024-07-15 21:11:41.072418] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.982 [2024-07-15 21:11:41.072484] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:13.982 TLSTESTn1 00:21:13.982 21:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:14.244 21:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:14.244 "subsystems": [ 00:21:14.244 { 00:21:14.244 "subsystem": "keyring", 00:21:14.244 "config": [] 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "subsystem": "iobuf", 00:21:14.244 "config": [ 00:21:14.244 { 00:21:14.244 "method": "iobuf_set_options", 00:21:14.244 "params": { 00:21:14.244 "small_pool_count": 8192, 00:21:14.244 "large_pool_count": 1024, 00:21:14.244 "small_bufsize": 8192, 00:21:14.244 "large_bufsize": 135168 00:21:14.244 } 00:21:14.244 } 00:21:14.244 ] 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "subsystem": "sock", 00:21:14.244 "config": [ 00:21:14.244 { 00:21:14.244 "method": "sock_set_default_impl", 00:21:14.244 "params": { 00:21:14.244 "impl_name": "posix" 00:21:14.244 } 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "method": "sock_impl_set_options", 00:21:14.244 "params": { 00:21:14.244 "impl_name": "ssl", 00:21:14.244 "recv_buf_size": 4096, 00:21:14.244 "send_buf_size": 4096, 00:21:14.244 "enable_recv_pipe": true, 00:21:14.244 "enable_quickack": false, 00:21:14.244 "enable_placement_id": 0, 00:21:14.244 "enable_zerocopy_send_server": true, 00:21:14.244 "enable_zerocopy_send_client": false, 00:21:14.244 "zerocopy_threshold": 0, 00:21:14.244 "tls_version": 0, 00:21:14.244 "enable_ktls": false 00:21:14.244 } 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "method": "sock_impl_set_options", 00:21:14.244 "params": { 00:21:14.244 "impl_name": "posix", 00:21:14.244 "recv_buf_size": 2097152, 00:21:14.244 
"send_buf_size": 2097152, 00:21:14.244 "enable_recv_pipe": true, 00:21:14.244 "enable_quickack": false, 00:21:14.244 "enable_placement_id": 0, 00:21:14.244 "enable_zerocopy_send_server": true, 00:21:14.244 "enable_zerocopy_send_client": false, 00:21:14.244 "zerocopy_threshold": 0, 00:21:14.244 "tls_version": 0, 00:21:14.244 "enable_ktls": false 00:21:14.244 } 00:21:14.244 } 00:21:14.244 ] 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "subsystem": "vmd", 00:21:14.244 "config": [] 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "subsystem": "accel", 00:21:14.244 "config": [ 00:21:14.244 { 00:21:14.244 "method": "accel_set_options", 00:21:14.244 "params": { 00:21:14.244 "small_cache_size": 128, 00:21:14.244 "large_cache_size": 16, 00:21:14.244 "task_count": 2048, 00:21:14.244 "sequence_count": 2048, 00:21:14.244 "buf_count": 2048 00:21:14.244 } 00:21:14.244 } 00:21:14.244 ] 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "subsystem": "bdev", 00:21:14.244 "config": [ 00:21:14.244 { 00:21:14.244 "method": "bdev_set_options", 00:21:14.244 "params": { 00:21:14.244 "bdev_io_pool_size": 65535, 00:21:14.244 "bdev_io_cache_size": 256, 00:21:14.244 "bdev_auto_examine": true, 00:21:14.244 "iobuf_small_cache_size": 128, 00:21:14.244 "iobuf_large_cache_size": 16 00:21:14.244 } 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "method": "bdev_raid_set_options", 00:21:14.244 "params": { 00:21:14.244 "process_window_size_kb": 1024 00:21:14.244 } 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "method": "bdev_iscsi_set_options", 00:21:14.244 "params": { 00:21:14.244 "timeout_sec": 30 00:21:14.244 } 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "method": "bdev_nvme_set_options", 00:21:14.244 "params": { 00:21:14.244 "action_on_timeout": "none", 00:21:14.244 "timeout_us": 0, 00:21:14.244 "timeout_admin_us": 0, 00:21:14.244 "keep_alive_timeout_ms": 10000, 00:21:14.244 "arbitration_burst": 0, 00:21:14.244 "low_priority_weight": 0, 00:21:14.244 "medium_priority_weight": 0, 00:21:14.244 "high_priority_weight": 0, 00:21:14.244 "nvme_adminq_poll_period_us": 10000, 00:21:14.244 "nvme_ioq_poll_period_us": 0, 00:21:14.244 "io_queue_requests": 0, 00:21:14.244 "delay_cmd_submit": true, 00:21:14.244 "transport_retry_count": 4, 00:21:14.244 "bdev_retry_count": 3, 00:21:14.244 "transport_ack_timeout": 0, 00:21:14.244 "ctrlr_loss_timeout_sec": 0, 00:21:14.244 "reconnect_delay_sec": 0, 00:21:14.244 "fast_io_fail_timeout_sec": 0, 00:21:14.244 "disable_auto_failback": false, 00:21:14.244 "generate_uuids": false, 00:21:14.244 "transport_tos": 0, 00:21:14.244 "nvme_error_stat": false, 00:21:14.244 "rdma_srq_size": 0, 00:21:14.244 "io_path_stat": false, 00:21:14.244 "allow_accel_sequence": false, 00:21:14.244 "rdma_max_cq_size": 0, 00:21:14.244 "rdma_cm_event_timeout_ms": 0, 00:21:14.244 "dhchap_digests": [ 00:21:14.244 "sha256", 00:21:14.244 "sha384", 00:21:14.244 "sha512" 00:21:14.244 ], 00:21:14.244 "dhchap_dhgroups": [ 00:21:14.244 "null", 00:21:14.244 "ffdhe2048", 00:21:14.244 "ffdhe3072", 00:21:14.244 "ffdhe4096", 00:21:14.244 "ffdhe6144", 00:21:14.244 "ffdhe8192" 00:21:14.244 ] 00:21:14.244 } 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "method": "bdev_nvme_set_hotplug", 00:21:14.244 "params": { 00:21:14.244 "period_us": 100000, 00:21:14.244 "enable": false 00:21:14.244 } 00:21:14.244 }, 00:21:14.244 { 00:21:14.244 "method": "bdev_malloc_create", 00:21:14.244 "params": { 00:21:14.244 "name": "malloc0", 00:21:14.244 "num_blocks": 8192, 00:21:14.244 "block_size": 4096, 00:21:14.245 "physical_block_size": 4096, 00:21:14.245 "uuid": 
"e32768f4-2bed-4542-a370-ef717593c216", 00:21:14.245 "optimal_io_boundary": 0 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "method": "bdev_wait_for_examine" 00:21:14.245 } 00:21:14.245 ] 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "subsystem": "nbd", 00:21:14.245 "config": [] 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "subsystem": "scheduler", 00:21:14.245 "config": [ 00:21:14.245 { 00:21:14.245 "method": "framework_set_scheduler", 00:21:14.245 "params": { 00:21:14.245 "name": "static" 00:21:14.245 } 00:21:14.245 } 00:21:14.245 ] 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "subsystem": "nvmf", 00:21:14.245 "config": [ 00:21:14.245 { 00:21:14.245 "method": "nvmf_set_config", 00:21:14.245 "params": { 00:21:14.245 "discovery_filter": "match_any", 00:21:14.245 "admin_cmd_passthru": { 00:21:14.245 "identify_ctrlr": false 00:21:14.245 } 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "method": "nvmf_set_max_subsystems", 00:21:14.245 "params": { 00:21:14.245 "max_subsystems": 1024 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "method": "nvmf_set_crdt", 00:21:14.245 "params": { 00:21:14.245 "crdt1": 0, 00:21:14.245 "crdt2": 0, 00:21:14.245 "crdt3": 0 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "method": "nvmf_create_transport", 00:21:14.245 "params": { 00:21:14.245 "trtype": "TCP", 00:21:14.245 "max_queue_depth": 128, 00:21:14.245 "max_io_qpairs_per_ctrlr": 127, 00:21:14.245 "in_capsule_data_size": 4096, 00:21:14.245 "max_io_size": 131072, 00:21:14.245 "io_unit_size": 131072, 00:21:14.245 "max_aq_depth": 128, 00:21:14.245 "num_shared_buffers": 511, 00:21:14.245 "buf_cache_size": 4294967295, 00:21:14.245 "dif_insert_or_strip": false, 00:21:14.245 "zcopy": false, 00:21:14.245 "c2h_success": false, 00:21:14.245 "sock_priority": 0, 00:21:14.245 "abort_timeout_sec": 1, 00:21:14.245 "ack_timeout": 0, 00:21:14.245 "data_wr_pool_size": 0 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "method": "nvmf_create_subsystem", 00:21:14.245 "params": { 00:21:14.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.245 "allow_any_host": false, 00:21:14.245 "serial_number": "SPDK00000000000001", 00:21:14.245 "model_number": "SPDK bdev Controller", 00:21:14.245 "max_namespaces": 10, 00:21:14.245 "min_cntlid": 1, 00:21:14.245 "max_cntlid": 65519, 00:21:14.245 "ana_reporting": false 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "method": "nvmf_subsystem_add_host", 00:21:14.245 "params": { 00:21:14.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.245 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.245 "psk": "/tmp/tmp.0YxTVOOO5N" 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "method": "nvmf_subsystem_add_ns", 00:21:14.245 "params": { 00:21:14.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.245 "namespace": { 00:21:14.245 "nsid": 1, 00:21:14.245 "bdev_name": "malloc0", 00:21:14.245 "nguid": "E32768F42BED4542A370EF717593C216", 00:21:14.245 "uuid": "e32768f4-2bed-4542-a370-ef717593c216", 00:21:14.245 "no_auto_visible": false 00:21:14.245 } 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.245 "method": "nvmf_subsystem_add_listener", 00:21:14.245 "params": { 00:21:14.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.245 "listen_address": { 00:21:14.245 "trtype": "TCP", 00:21:14.245 "adrfam": "IPv4", 00:21:14.245 "traddr": "10.0.0.2", 00:21:14.245 "trsvcid": "4420" 00:21:14.245 }, 00:21:14.245 "secure_channel": true 00:21:14.245 } 00:21:14.245 } 00:21:14.245 ] 00:21:14.245 } 00:21:14.245 ] 00:21:14.245 }' 00:21:14.245 21:11:41 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:14.506 21:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:14.506 "subsystems": [ 00:21:14.506 { 00:21:14.506 "subsystem": "keyring", 00:21:14.506 "config": [] 00:21:14.506 }, 00:21:14.506 { 00:21:14.506 "subsystem": "iobuf", 00:21:14.506 "config": [ 00:21:14.506 { 00:21:14.506 "method": "iobuf_set_options", 00:21:14.506 "params": { 00:21:14.506 "small_pool_count": 8192, 00:21:14.506 "large_pool_count": 1024, 00:21:14.506 "small_bufsize": 8192, 00:21:14.506 "large_bufsize": 135168 00:21:14.506 } 00:21:14.506 } 00:21:14.506 ] 00:21:14.506 }, 00:21:14.506 { 00:21:14.506 "subsystem": "sock", 00:21:14.506 "config": [ 00:21:14.506 { 00:21:14.506 "method": "sock_set_default_impl", 00:21:14.506 "params": { 00:21:14.506 "impl_name": "posix" 00:21:14.506 } 00:21:14.506 }, 00:21:14.506 { 00:21:14.506 "method": "sock_impl_set_options", 00:21:14.506 "params": { 00:21:14.506 "impl_name": "ssl", 00:21:14.506 "recv_buf_size": 4096, 00:21:14.506 "send_buf_size": 4096, 00:21:14.506 "enable_recv_pipe": true, 00:21:14.506 "enable_quickack": false, 00:21:14.506 "enable_placement_id": 0, 00:21:14.506 "enable_zerocopy_send_server": true, 00:21:14.506 "enable_zerocopy_send_client": false, 00:21:14.506 "zerocopy_threshold": 0, 00:21:14.506 "tls_version": 0, 00:21:14.506 "enable_ktls": false 00:21:14.506 } 00:21:14.506 }, 00:21:14.506 { 00:21:14.506 "method": "sock_impl_set_options", 00:21:14.506 "params": { 00:21:14.506 "impl_name": "posix", 00:21:14.506 "recv_buf_size": 2097152, 00:21:14.506 "send_buf_size": 2097152, 00:21:14.506 "enable_recv_pipe": true, 00:21:14.506 "enable_quickack": false, 00:21:14.506 "enable_placement_id": 0, 00:21:14.506 "enable_zerocopy_send_server": true, 00:21:14.506 "enable_zerocopy_send_client": false, 00:21:14.506 "zerocopy_threshold": 0, 00:21:14.506 "tls_version": 0, 00:21:14.506 "enable_ktls": false 00:21:14.506 } 00:21:14.506 } 00:21:14.506 ] 00:21:14.506 }, 00:21:14.506 { 00:21:14.506 "subsystem": "vmd", 00:21:14.506 "config": [] 00:21:14.506 }, 00:21:14.506 { 00:21:14.506 "subsystem": "accel", 00:21:14.506 "config": [ 00:21:14.506 { 00:21:14.506 "method": "accel_set_options", 00:21:14.506 "params": { 00:21:14.507 "small_cache_size": 128, 00:21:14.507 "large_cache_size": 16, 00:21:14.507 "task_count": 2048, 00:21:14.507 "sequence_count": 2048, 00:21:14.507 "buf_count": 2048 00:21:14.507 } 00:21:14.507 } 00:21:14.507 ] 00:21:14.507 }, 00:21:14.507 { 00:21:14.507 "subsystem": "bdev", 00:21:14.507 "config": [ 00:21:14.507 { 00:21:14.507 "method": "bdev_set_options", 00:21:14.507 "params": { 00:21:14.507 "bdev_io_pool_size": 65535, 00:21:14.507 "bdev_io_cache_size": 256, 00:21:14.507 "bdev_auto_examine": true, 00:21:14.507 "iobuf_small_cache_size": 128, 00:21:14.507 "iobuf_large_cache_size": 16 00:21:14.507 } 00:21:14.507 }, 00:21:14.507 { 00:21:14.507 "method": "bdev_raid_set_options", 00:21:14.507 "params": { 00:21:14.507 "process_window_size_kb": 1024 00:21:14.507 } 00:21:14.507 }, 00:21:14.507 { 00:21:14.507 "method": "bdev_iscsi_set_options", 00:21:14.507 "params": { 00:21:14.507 "timeout_sec": 30 00:21:14.507 } 00:21:14.507 }, 00:21:14.507 { 00:21:14.507 "method": "bdev_nvme_set_options", 00:21:14.507 "params": { 00:21:14.507 "action_on_timeout": "none", 00:21:14.507 "timeout_us": 0, 00:21:14.507 "timeout_admin_us": 0, 00:21:14.507 "keep_alive_timeout_ms": 10000, 00:21:14.507 "arbitration_burst": 0, 
00:21:14.507 "low_priority_weight": 0, 00:21:14.507 "medium_priority_weight": 0, 00:21:14.507 "high_priority_weight": 0, 00:21:14.507 "nvme_adminq_poll_period_us": 10000, 00:21:14.507 "nvme_ioq_poll_period_us": 0, 00:21:14.507 "io_queue_requests": 512, 00:21:14.507 "delay_cmd_submit": true, 00:21:14.507 "transport_retry_count": 4, 00:21:14.507 "bdev_retry_count": 3, 00:21:14.507 "transport_ack_timeout": 0, 00:21:14.507 "ctrlr_loss_timeout_sec": 0, 00:21:14.507 "reconnect_delay_sec": 0, 00:21:14.507 "fast_io_fail_timeout_sec": 0, 00:21:14.507 "disable_auto_failback": false, 00:21:14.507 "generate_uuids": false, 00:21:14.507 "transport_tos": 0, 00:21:14.507 "nvme_error_stat": false, 00:21:14.507 "rdma_srq_size": 0, 00:21:14.507 "io_path_stat": false, 00:21:14.507 "allow_accel_sequence": false, 00:21:14.507 "rdma_max_cq_size": 0, 00:21:14.507 "rdma_cm_event_timeout_ms": 0, 00:21:14.507 "dhchap_digests": [ 00:21:14.507 "sha256", 00:21:14.507 "sha384", 00:21:14.507 "sha512" 00:21:14.507 ], 00:21:14.507 "dhchap_dhgroups": [ 00:21:14.507 "null", 00:21:14.507 "ffdhe2048", 00:21:14.507 "ffdhe3072", 00:21:14.507 "ffdhe4096", 00:21:14.507 "ffdhe6144", 00:21:14.507 "ffdhe8192" 00:21:14.507 ] 00:21:14.507 } 00:21:14.507 }, 00:21:14.507 { 00:21:14.507 "method": "bdev_nvme_attach_controller", 00:21:14.507 "params": { 00:21:14.507 "name": "TLSTEST", 00:21:14.507 "trtype": "TCP", 00:21:14.507 "adrfam": "IPv4", 00:21:14.507 "traddr": "10.0.0.2", 00:21:14.507 "trsvcid": "4420", 00:21:14.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.507 "prchk_reftag": false, 00:21:14.507 "prchk_guard": false, 00:21:14.507 "ctrlr_loss_timeout_sec": 0, 00:21:14.507 "reconnect_delay_sec": 0, 00:21:14.507 "fast_io_fail_timeout_sec": 0, 00:21:14.507 "psk": "/tmp/tmp.0YxTVOOO5N", 00:21:14.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.507 "hdgst": false, 00:21:14.507 "ddgst": false 00:21:14.507 } 00:21:14.507 }, 00:21:14.507 { 00:21:14.507 "method": "bdev_nvme_set_hotplug", 00:21:14.507 "params": { 00:21:14.507 "period_us": 100000, 00:21:14.507 "enable": false 00:21:14.507 } 00:21:14.507 }, 00:21:14.507 { 00:21:14.507 "method": "bdev_wait_for_examine" 00:21:14.507 } 00:21:14.507 ] 00:21:14.507 }, 00:21:14.507 { 00:21:14.507 "subsystem": "nbd", 00:21:14.507 "config": [] 00:21:14.507 } 00:21:14.507 ] 00:21:14.507 }' 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2004991 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2004991 ']' 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2004991 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2004991 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2004991' 00:21:14.507 killing process with pid 2004991 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2004991 00:21:14.507 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.507 00:21:14.507 Latency(us) 00:21:14.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:14.507 =================================================================================================================== 00:21:14.507 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:14.507 [2024-07-15 21:11:41.708179] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:14.507 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2004991 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2004630 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2004630 ']' 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2004630 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2004630 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2004630' 00:21:14.769 killing process with pid 2004630 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2004630 00:21:14.769 [2024-07-15 21:11:41.878009] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2004630 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.769 21:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:14.769 "subsystems": [ 00:21:14.769 { 00:21:14.769 "subsystem": "keyring", 00:21:14.769 "config": [] 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "subsystem": "iobuf", 00:21:14.769 "config": [ 00:21:14.769 { 00:21:14.769 "method": "iobuf_set_options", 00:21:14.769 "params": { 00:21:14.769 "small_pool_count": 8192, 00:21:14.769 "large_pool_count": 1024, 00:21:14.769 "small_bufsize": 8192, 00:21:14.769 "large_bufsize": 135168 00:21:14.769 } 00:21:14.769 } 00:21:14.769 ] 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "subsystem": "sock", 00:21:14.769 "config": [ 00:21:14.769 { 00:21:14.769 "method": "sock_set_default_impl", 00:21:14.769 "params": { 00:21:14.769 "impl_name": "posix" 00:21:14.769 } 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "method": "sock_impl_set_options", 00:21:14.769 "params": { 00:21:14.769 "impl_name": "ssl", 00:21:14.769 "recv_buf_size": 4096, 00:21:14.769 "send_buf_size": 4096, 00:21:14.769 "enable_recv_pipe": true, 00:21:14.769 "enable_quickack": false, 00:21:14.769 "enable_placement_id": 0, 00:21:14.769 "enable_zerocopy_send_server": true, 00:21:14.769 "enable_zerocopy_send_client": false, 00:21:14.769 "zerocopy_threshold": 0, 00:21:14.769 "tls_version": 0, 00:21:14.769 "enable_ktls": false 00:21:14.769 } 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "method": "sock_impl_set_options", 
00:21:14.769 "params": { 00:21:14.769 "impl_name": "posix", 00:21:14.769 "recv_buf_size": 2097152, 00:21:14.769 "send_buf_size": 2097152, 00:21:14.769 "enable_recv_pipe": true, 00:21:14.769 "enable_quickack": false, 00:21:14.769 "enable_placement_id": 0, 00:21:14.769 "enable_zerocopy_send_server": true, 00:21:14.769 "enable_zerocopy_send_client": false, 00:21:14.769 "zerocopy_threshold": 0, 00:21:14.769 "tls_version": 0, 00:21:14.769 "enable_ktls": false 00:21:14.769 } 00:21:14.769 } 00:21:14.769 ] 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "subsystem": "vmd", 00:21:14.769 "config": [] 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "subsystem": "accel", 00:21:14.769 "config": [ 00:21:14.769 { 00:21:14.769 "method": "accel_set_options", 00:21:14.769 "params": { 00:21:14.769 "small_cache_size": 128, 00:21:14.769 "large_cache_size": 16, 00:21:14.769 "task_count": 2048, 00:21:14.769 "sequence_count": 2048, 00:21:14.769 "buf_count": 2048 00:21:14.769 } 00:21:14.769 } 00:21:14.769 ] 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "subsystem": "bdev", 00:21:14.769 "config": [ 00:21:14.769 { 00:21:14.769 "method": "bdev_set_options", 00:21:14.769 "params": { 00:21:14.769 "bdev_io_pool_size": 65535, 00:21:14.769 "bdev_io_cache_size": 256, 00:21:14.769 "bdev_auto_examine": true, 00:21:14.769 "iobuf_small_cache_size": 128, 00:21:14.769 "iobuf_large_cache_size": 16 00:21:14.769 } 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "method": "bdev_raid_set_options", 00:21:14.769 "params": { 00:21:14.769 "process_window_size_kb": 1024 00:21:14.769 } 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "method": "bdev_iscsi_set_options", 00:21:14.769 "params": { 00:21:14.769 "timeout_sec": 30 00:21:14.769 } 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "method": "bdev_nvme_set_options", 00:21:14.769 "params": { 00:21:14.769 "action_on_timeout": "none", 00:21:14.769 "timeout_us": 0, 00:21:14.770 "timeout_admin_us": 0, 00:21:14.770 "keep_alive_timeout_ms": 10000, 00:21:14.770 "arbitration_burst": 0, 00:21:14.770 "low_priority_weight": 0, 00:21:14.770 "medium_priority_weight": 0, 00:21:14.770 "high_priority_weight": 0, 00:21:14.770 "nvme_adminq_poll_period_us": 10000, 00:21:14.770 "nvme_ioq_poll_period_us": 0, 00:21:14.770 "io_queue_requests": 0, 00:21:14.770 "delay_cmd_submit": true, 00:21:14.770 "transport_retry_count": 4, 00:21:14.770 "bdev_retry_count": 3, 00:21:14.770 "transport_ack_timeout": 0, 00:21:14.770 "ctrlr_loss_timeout_sec": 0, 00:21:14.770 "reconnect_delay_sec": 0, 00:21:14.770 "fast_io_fail_timeout_sec": 0, 00:21:14.770 "disable_auto_failback": false, 00:21:14.770 "generate_uuids": false, 00:21:14.770 "transport_tos": 0, 00:21:14.770 "nvme_error_stat": false, 00:21:14.770 "rdma_srq_size": 0, 00:21:14.770 "io_path_stat": false, 00:21:14.770 "allow_accel_sequence": false, 00:21:14.770 "rdma_max_cq_size": 0, 00:21:14.770 "rdma_cm_event_timeout_ms": 0, 00:21:14.770 "dhchap_digests": [ 00:21:14.770 "sha256", 00:21:14.770 "sha384", 00:21:14.770 "sha512" 00:21:14.770 ], 00:21:14.770 "dhchap_dhgroups": [ 00:21:14.770 "null", 00:21:14.770 "ffdhe2048", 00:21:14.770 "ffdhe3072", 00:21:14.770 "ffdhe4096", 00:21:14.770 "ffdhe6144", 00:21:14.770 "ffdhe8192" 00:21:14.770 ] 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "bdev_nvme_set_hotplug", 00:21:14.770 "params": { 00:21:14.770 "period_us": 100000, 00:21:14.770 "enable": false 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "bdev_malloc_create", 00:21:14.770 "params": { 00:21:14.770 "name": "malloc0", 00:21:14.770 "num_blocks": 8192, 
00:21:14.770 "block_size": 4096, 00:21:14.770 "physical_block_size": 4096, 00:21:14.770 "uuid": "e32768f4-2bed-4542-a370-ef717593c216", 00:21:14.770 "optimal_io_boundary": 0 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "bdev_wait_for_examine" 00:21:14.770 } 00:21:14.770 ] 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "subsystem": "nbd", 00:21:14.770 "config": [] 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "subsystem": "scheduler", 00:21:14.770 "config": [ 00:21:14.770 { 00:21:14.770 "method": "framework_set_scheduler", 00:21:14.770 "params": { 00:21:14.770 "name": "static" 00:21:14.770 } 00:21:14.770 } 00:21:14.770 ] 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "subsystem": "nvmf", 00:21:14.770 "config": [ 00:21:14.770 { 00:21:14.770 "method": "nvmf_set_config", 00:21:14.770 "params": { 00:21:14.770 "discovery_filter": "match_any", 00:21:14.770 "admin_cmd_passthru": { 00:21:14.770 "identify_ctrlr": false 00:21:14.770 } 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "nvmf_set_max_subsystems", 00:21:14.770 "params": { 00:21:14.770 "max_subsystems": 1024 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "nvmf_set_crdt", 00:21:14.770 "params": { 00:21:14.770 "crdt1": 0, 00:21:14.770 "crdt2": 0, 00:21:14.770 "crdt3": 0 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "nvmf_create_transport", 00:21:14.770 "params": { 00:21:14.770 "trtype": "TCP", 00:21:14.770 "max_queue_depth": 128, 00:21:14.770 "max_io_qpairs_per_ctrlr": 127, 00:21:14.770 "in_capsule_data_size": 4096, 00:21:14.770 "max_io_size": 131072, 00:21:14.770 "io_unit_size": 131072, 00:21:14.770 "max_aq_depth": 128, 00:21:14.770 "num_shared_buffers": 511, 00:21:14.770 "buf_cache_size": 4294967295, 00:21:14.770 "dif_insert_or_strip": false, 00:21:14.770 "zcopy": false, 00:21:14.770 "c2h_success": false, 00:21:14.770 "sock_priority": 0, 00:21:14.770 "abort_timeout_sec": 1, 00:21:14.770 "ack_timeout": 0, 00:21:14.770 "data_wr_pool_size": 0 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "nvmf_create_subsystem", 00:21:14.770 "params": { 00:21:14.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.770 "allow_any_host": false, 00:21:14.770 "serial_number": "SPDK00000000000001", 00:21:14.770 "model_number": "SPDK bdev Controller", 00:21:14.770 "max_namespaces": 10, 00:21:14.770 "min_cntlid": 1, 00:21:14.770 "max_cntlid": 65519, 00:21:14.770 "ana_reporting": false 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "nvmf_subsystem_add_host", 00:21:14.770 "params": { 00:21:14.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.770 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.770 "psk": "/tmp/tmp.0YxTVOOO5N" 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "nvmf_subsystem_add_ns", 00:21:14.770 "params": { 00:21:14.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.770 "namespace": { 00:21:14.770 "nsid": 1, 00:21:14.770 "bdev_name": "malloc0", 00:21:14.770 "nguid": "E32768F42BED4542A370EF717593C216", 00:21:14.770 "uuid": "e32768f4-2bed-4542-a370-ef717593c216", 00:21:14.770 "no_auto_visible": false 00:21:14.770 } 00:21:14.770 } 00:21:14.770 }, 00:21:14.770 { 00:21:14.770 "method": "nvmf_subsystem_add_listener", 00:21:14.770 "params": { 00:21:14.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.770 "listen_address": { 00:21:14.770 "trtype": "TCP", 00:21:14.770 "adrfam": "IPv4", 00:21:14.770 "traddr": "10.0.0.2", 00:21:14.770 "trsvcid": "4420" 00:21:14.770 }, 00:21:14.770 "secure_channel": true 00:21:14.770 } 
00:21:14.770 } 00:21:14.770 ] 00:21:14.770 } 00:21:14.770 ] 00:21:14.770 }' 00:21:14.770 21:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2005336 00:21:14.770 21:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2005336 00:21:14.770 21:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:14.770 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2005336 ']' 00:21:14.770 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.770 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.770 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.771 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.771 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.771 [2024-07-15 21:11:42.057388] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:14.771 [2024-07-15 21:11:42.057445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.038 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.038 [2024-07-15 21:11:42.143145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.038 [2024-07-15 21:11:42.197691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.038 [2024-07-15 21:11:42.197723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.038 [2024-07-15 21:11:42.197728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.038 [2024-07-15 21:11:42.197733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.038 [2024-07-15 21:11:42.197737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
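The JSON blob echoed just above is not hand-written: it is the output of the earlier save_config call against the running target, and target/tls.sh feeds it back into a fresh nvmf_tgt through process substitution, which is why the command line shows -c /dev/fd/62. That recreates the transport, subsystem, TLS listener and PSK host entry in one shot instead of re-issuing the individual RPCs. A minimal sketch of the pattern, assuming the default /var/tmp/spdk.sock RPC socket and shortened paths:

    # capture the live configuration of the current target
    tgtconf=$(./scripts/rpc.py save_config)
    # ... stop the old target, then replay the config at startup;
    # <(echo ...) is what shows up as /dev/fd/62 in the process arguments
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")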
00:21:15.038 [2024-07-15 21:11:42.197780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.317 [2024-07-15 21:11:42.381465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.317 [2024-07-15 21:11:42.397439] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:15.317 [2024-07-15 21:11:42.413488] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.318 [2024-07-15 21:11:42.426555] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2005516 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2005516 /var/tmp/bdevperf.sock 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2005516 ']' 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.578 21:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:15.578 "subsystems": [ 00:21:15.578 { 00:21:15.578 "subsystem": "keyring", 00:21:15.578 "config": [] 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "subsystem": "iobuf", 00:21:15.578 "config": [ 00:21:15.578 { 00:21:15.578 "method": "iobuf_set_options", 00:21:15.578 "params": { 00:21:15.578 "small_pool_count": 8192, 00:21:15.578 "large_pool_count": 1024, 00:21:15.578 "small_bufsize": 8192, 00:21:15.578 "large_bufsize": 135168 00:21:15.578 } 00:21:15.578 } 00:21:15.578 ] 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "subsystem": "sock", 00:21:15.578 "config": [ 00:21:15.578 { 00:21:15.578 "method": "sock_set_default_impl", 00:21:15.578 "params": { 00:21:15.578 "impl_name": "posix" 00:21:15.578 } 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "method": "sock_impl_set_options", 00:21:15.578 "params": { 00:21:15.578 "impl_name": "ssl", 00:21:15.578 "recv_buf_size": 4096, 00:21:15.578 "send_buf_size": 4096, 00:21:15.578 "enable_recv_pipe": true, 00:21:15.578 "enable_quickack": false, 00:21:15.578 "enable_placement_id": 0, 00:21:15.578 "enable_zerocopy_send_server": true, 00:21:15.578 "enable_zerocopy_send_client": false, 00:21:15.578 "zerocopy_threshold": 0, 00:21:15.578 "tls_version": 0, 00:21:15.578 "enable_ktls": false 00:21:15.578 } 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "method": "sock_impl_set_options", 00:21:15.578 "params": { 00:21:15.578 "impl_name": "posix", 00:21:15.578 "recv_buf_size": 2097152, 00:21:15.578 "send_buf_size": 2097152, 00:21:15.578 "enable_recv_pipe": true, 00:21:15.578 "enable_quickack": false, 00:21:15.578 "enable_placement_id": 0, 00:21:15.578 "enable_zerocopy_send_server": true, 00:21:15.578 "enable_zerocopy_send_client": false, 00:21:15.578 "zerocopy_threshold": 0, 00:21:15.578 "tls_version": 0, 00:21:15.578 "enable_ktls": false 00:21:15.578 } 00:21:15.578 } 00:21:15.578 ] 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "subsystem": "vmd", 00:21:15.578 "config": [] 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "subsystem": "accel", 00:21:15.578 "config": [ 00:21:15.578 { 00:21:15.578 "method": "accel_set_options", 00:21:15.578 "params": { 00:21:15.578 "small_cache_size": 128, 00:21:15.578 "large_cache_size": 16, 00:21:15.578 "task_count": 2048, 00:21:15.578 "sequence_count": 2048, 00:21:15.578 "buf_count": 2048 00:21:15.578 } 00:21:15.578 } 00:21:15.578 ] 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "subsystem": "bdev", 00:21:15.578 "config": [ 00:21:15.578 { 00:21:15.578 "method": "bdev_set_options", 00:21:15.578 "params": { 00:21:15.578 "bdev_io_pool_size": 65535, 00:21:15.578 "bdev_io_cache_size": 256, 00:21:15.578 "bdev_auto_examine": true, 00:21:15.578 "iobuf_small_cache_size": 128, 00:21:15.578 "iobuf_large_cache_size": 16 00:21:15.578 } 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "method": "bdev_raid_set_options", 00:21:15.578 "params": { 00:21:15.578 "process_window_size_kb": 1024 00:21:15.578 } 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "method": "bdev_iscsi_set_options", 00:21:15.578 "params": { 00:21:15.578 "timeout_sec": 30 00:21:15.578 } 00:21:15.578 }, 00:21:15.578 { 00:21:15.578 "method": 
"bdev_nvme_set_options", 00:21:15.578 "params": { 00:21:15.578 "action_on_timeout": "none", 00:21:15.578 "timeout_us": 0, 00:21:15.578 "timeout_admin_us": 0, 00:21:15.578 "keep_alive_timeout_ms": 10000, 00:21:15.578 "arbitration_burst": 0, 00:21:15.578 "low_priority_weight": 0, 00:21:15.578 "medium_priority_weight": 0, 00:21:15.578 "high_priority_weight": 0, 00:21:15.578 "nvme_adminq_poll_period_us": 10000, 00:21:15.578 "nvme_ioq_poll_period_us": 0, 00:21:15.578 "io_queue_requests": 512, 00:21:15.578 "delay_cmd_submit": true, 00:21:15.578 "transport_retry_count": 4, 00:21:15.578 "bdev_retry_count": 3, 00:21:15.578 "transport_ack_timeout": 0, 00:21:15.579 "ctrlr_loss_timeout_sec": 0, 00:21:15.579 "reconnect_delay_sec": 0, 00:21:15.579 "fast_io_fail_timeout_sec": 0, 00:21:15.579 "disable_auto_failback": false, 00:21:15.579 "generate_uuids": false, 00:21:15.579 "transport_tos": 0, 00:21:15.579 "nvme_error_stat": false, 00:21:15.579 "rdma_srq_size": 0, 00:21:15.579 "io_path_stat": false, 00:21:15.579 "allow_accel_sequence": false, 00:21:15.579 "rdma_max_cq_size": 0, 00:21:15.579 "rdma_cm_event_timeout_ms": 0, 00:21:15.579 "dhchap_digests": [ 00:21:15.579 "sha256", 00:21:15.579 "sha384", 00:21:15.579 "sha512" 00:21:15.579 ], 00:21:15.579 "dhchap_dhgroups": [ 00:21:15.579 "null", 00:21:15.579 "ffdhe2048", 00:21:15.579 "ffdhe3072", 00:21:15.579 "ffdhe4096", 00:21:15.579 "ffdhe6144", 00:21:15.579 "ffdhe8192" 00:21:15.579 ] 00:21:15.579 } 00:21:15.579 }, 00:21:15.579 { 00:21:15.579 "method": "bdev_nvme_attach_controller", 00:21:15.579 "params": { 00:21:15.579 "name": "TLSTEST", 00:21:15.579 "trtype": "TCP", 00:21:15.579 "adrfam": "IPv4", 00:21:15.579 "traddr": "10.0.0.2", 00:21:15.579 "trsvcid": "4420", 00:21:15.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.579 "prchk_reftag": false, 00:21:15.579 "prchk_guard": false, 00:21:15.579 "ctrlr_loss_timeout_sec": 0, 00:21:15.579 "reconnect_delay_sec": 0, 00:21:15.579 "fast_io_fail_timeout_sec": 0, 00:21:15.579 "psk": "/tmp/tmp.0YxTVOOO5N", 00:21:15.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.579 "hdgst": false, 00:21:15.579 "ddgst": false 00:21:15.579 } 00:21:15.579 }, 00:21:15.579 { 00:21:15.579 "method": "bdev_nvme_set_hotplug", 00:21:15.579 "params": { 00:21:15.579 "period_us": 100000, 00:21:15.579 "enable": false 00:21:15.579 } 00:21:15.579 }, 00:21:15.579 { 00:21:15.579 "method": "bdev_wait_for_examine" 00:21:15.579 } 00:21:15.579 ] 00:21:15.579 }, 00:21:15.579 { 00:21:15.579 "subsystem": "nbd", 00:21:15.579 "config": [] 00:21:15.579 } 00:21:15.579 ] 00:21:15.579 }' 00:21:15.839 [2024-07-15 21:11:42.901162] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:21:15.839 [2024-07-15 21:11:42.901216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005516 ] 00:21:15.839 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.839 [2024-07-15 21:11:42.955755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.839 [2024-07-15 21:11:43.008390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.099 [2024-07-15 21:11:43.133004] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.099 [2024-07-15 21:11:43.133066] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:16.668 21:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.668 21:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:16.668 21:11:43 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:16.668 Running I/O for 10 seconds... 00:21:26.751 00:21:26.751 Latency(us) 00:21:26.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.751 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:26.751 Verification LBA range: start 0x0 length 0x2000 00:21:26.751 TLSTESTn1 : 10.02 5284.04 20.64 0.00 0.00 24184.99 6198.61 79080.11 00:21:26.751 =================================================================================================================== 00:21:26.751 Total : 5284.04 20.64 0.00 0.00 24184.99 6198.61 79080.11 00:21:26.751 0 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2005516 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2005516 ']' 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2005516 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2005516 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2005516' 00:21:26.751 killing process with pid 2005516 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2005516 00:21:26.751 Received shutdown signal, test time was about 10.000000 seconds 00:21:26.751 00:21:26.751 Latency(us) 00:21:26.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.751 =================================================================================================================== 00:21:26.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.751 [2024-07-15 21:11:53.869945] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2005516 00:21:26.751 21:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2005336 00:21:26.752 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2005336 ']' 00:21:26.752 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2005336 00:21:26.752 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:26.752 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.752 21:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2005336 00:21:26.752 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:26.752 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:26.752 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2005336' 00:21:26.752 killing process with pid 2005336 00:21:26.752 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2005336 00:21:26.752 [2024-07-15 21:11:54.034740] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:26.752 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2005336 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2007709 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2007709 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2007709 ']' 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.011 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.011 [2024-07-15 21:11:54.212174] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
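killprocess, which appears throughout this log (pids 2005516 and 2005336 just above), is the harness's generic teardown helper: confirm the pid is still alive, check what it is, log the kill, send the signal and reap it. A condensed sketch reconstructed from the xtrace output; the real helper in autotest_common.sh also special-cases processes launched through sudo:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # fail fast if the process is already gone
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0/1/2 for the SPDK apps in this log
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                        # reap it and propagate the exit status
    }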
00:21:27.011 [2024-07-15 21:11:54.212224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.011 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.011 [2024-07-15 21:11:54.284713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.271 [2024-07-15 21:11:54.347994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.271 [2024-07-15 21:11:54.348036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.271 [2024-07-15 21:11:54.348044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.271 [2024-07-15 21:11:54.348050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.271 [2024-07-15 21:11:54.348060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.271 [2024-07-15 21:11:54.348081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.843 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.843 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:27.843 21:11:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.843 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.843 21:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.843 21:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.843 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.0YxTVOOO5N 00:21:27.843 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0YxTVOOO5N 00:21:27.843 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:28.103 [2024-07-15 21:11:55.167270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.103 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:28.103 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:28.363 [2024-07-15 21:11:55.476014] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:28.363 [2024-07-15 21:11:55.476218] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.363 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:28.363 malloc0 00:21:28.363 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:28.623 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.0YxTVOOO5N 00:21:28.884 [2024-07-15 21:11:55.936078] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2008075 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2008075 /var/tmp/bdevperf.sock 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2008075 ']' 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.884 21:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.884 [2024-07-15 21:11:56.008649] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:28.884 [2024-07-15 21:11:56.008700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008075 ] 00:21:28.884 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.884 [2024-07-15 21:11:56.089394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.884 [2024-07-15 21:11:56.143886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.826 21:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.827 21:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:29.827 21:11:56 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0YxTVOOO5N 00:21:29.827 21:11:56 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:29.827 [2024-07-15 21:11:57.034169] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.827 nvme0n1 00:21:30.087 21:11:57 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.087 Running I/O for 1 seconds... 
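Condensed, the TLS/PSK flow that the trace above exercises is the following RPC sequence. This is a sketch only: rpc.py stands for scripts/rpc.py in the SPDK tree (the trace uses its full workspace path), the target answers on the default RPC socket /var/tmp/spdk.sock, and /tmp/tmp.0YxTVOOO5N is the per-run interchange PSK file created earlier in target/tls.sh.

  # target side: TCP transport, subsystem, TLS listener, namespace, PSK-protected host entry
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (still experimental)
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0YxTVOOO5N   # PSK-path form, deprecated for v24.09

  # initiator side: bdevperf with its own RPC socket, PSK referenced through the keyring
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0YxTVOOO5N
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 'PSK path' deprecation warning logged above is triggered by handing a file path to nvmf_subsystem_add_host; on the initiator side the key is registered with keyring_file_add_key and referenced by name (key0) instead.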
00:21:31.028 00:21:31.028 Latency(us) 00:21:31.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.028 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:31.028 Verification LBA range: start 0x0 length 0x2000 00:21:31.028 nvme0n1 : 1.02 3709.88 14.49 0.00 0.00 34213.08 4478.29 35389.44 00:21:31.028 =================================================================================================================== 00:21:31.028 Total : 3709.88 14.49 0.00 0.00 34213.08 4478.29 35389.44 00:21:31.028 0 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2008075 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2008075 ']' 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2008075 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2008075 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2008075' 00:21:31.028 killing process with pid 2008075 00:21:31.028 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2008075 00:21:31.028 Received shutdown signal, test time was about 1.000000 seconds 00:21:31.028 00:21:31.028 Latency(us) 00:21:31.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.028 =================================================================================================================== 00:21:31.029 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.029 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2008075 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2007709 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2007709 ']' 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2007709 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2007709 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2007709' 00:21:31.289 killing process with pid 2007709 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2007709 00:21:31.289 [2024-07-15 21:11:58.470164] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:31.289 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2007709 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.550 
21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2008676 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2008676 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2008676 ']' 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.550 21:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.550 [2024-07-15 21:11:58.673930] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:31.550 [2024-07-15 21:11:58.674007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.550 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.550 [2024-07-15 21:11:58.753408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.550 [2024-07-15 21:11:58.817802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.550 [2024-07-15 21:11:58.817841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.550 [2024-07-15 21:11:58.817849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.550 [2024-07-15 21:11:58.817855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.550 [2024-07-15 21:11:58.817861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
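The repeated shutdown sequences in this trace (pids 2005336, 2008075, 2007709, ...) all go through the killprocess helper from autotest_common.sh, whose logic is visible in the xtrace lines above. A minimal sketch reconstructed from what the trace shows; the sudo special case is checked but never taken in this run, and only the Linux branch is exercised.

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid"                                    # liveness probe; fails if the pid is already gone
      process_name=$(ps --no-headers -o comm= "$pid")   # Linux branch, the only one seen in this trace
      if [ "$process_name" = sudo ]; then
          :                                             # the real helper resolves the sudo child first; not hit here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap the child and surface its exit status
  }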
00:21:31.550 [2024-07-15 21:11:58.817878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.491 [2024-07-15 21:11:59.476270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.491 malloc0 00:21:32.491 [2024-07-15 21:11:59.503063] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.491 [2024-07-15 21:11:59.503269] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2008784 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2008784 /var/tmp/bdevperf.sock 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2008784 ']' 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.491 21:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.491 [2024-07-15 21:11:59.580503] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:21:32.492 [2024-07-15 21:11:59.580550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008784 ] 00:21:32.492 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.492 [2024-07-15 21:11:59.659381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.492 [2024-07-15 21:11:59.713203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.063 21:12:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.063 21:12:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:33.063 21:12:00 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0YxTVOOO5N 00:21:33.323 21:12:00 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:33.583 [2024-07-15 21:12:00.639547] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.583 nvme0n1 00:21:33.583 21:12:00 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:33.583 Running I/O for 1 seconds... 00:21:34.966 00:21:34.966 Latency(us) 00:21:34.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.966 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:34.966 Verification LBA range: start 0x0 length 0x2000 00:21:34.966 nvme0n1 : 1.03 4754.78 18.57 0.00 0.00 26586.51 6171.31 53957.97 00:21:34.966 =================================================================================================================== 00:21:34.966 Total : 4754.78 18.57 0.00 0.00 26586.51 6171.31 53957.97 00:21:34.966 0 00:21:34.966 21:12:01 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:34.966 21:12:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.966 21:12:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.966 21:12:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.966 21:12:01 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:34.966 "subsystems": [ 00:21:34.966 { 00:21:34.966 "subsystem": "keyring", 00:21:34.966 "config": [ 00:21:34.966 { 00:21:34.966 "method": "keyring_file_add_key", 00:21:34.966 "params": { 00:21:34.966 "name": "key0", 00:21:34.966 "path": "/tmp/tmp.0YxTVOOO5N" 00:21:34.966 } 00:21:34.966 } 00:21:34.966 ] 00:21:34.966 }, 00:21:34.966 { 00:21:34.966 "subsystem": "iobuf", 00:21:34.966 "config": [ 00:21:34.966 { 00:21:34.966 "method": "iobuf_set_options", 00:21:34.966 "params": { 00:21:34.966 "small_pool_count": 8192, 00:21:34.966 "large_pool_count": 1024, 00:21:34.966 "small_bufsize": 8192, 00:21:34.966 "large_bufsize": 135168 00:21:34.966 } 00:21:34.966 } 00:21:34.966 ] 00:21:34.966 }, 00:21:34.966 { 00:21:34.966 "subsystem": "sock", 00:21:34.966 "config": [ 00:21:34.966 { 00:21:34.966 "method": "sock_set_default_impl", 00:21:34.966 "params": { 00:21:34.966 "impl_name": "posix" 00:21:34.966 } 
00:21:34.966 }, 00:21:34.966 { 00:21:34.966 "method": "sock_impl_set_options", 00:21:34.966 "params": { 00:21:34.966 "impl_name": "ssl", 00:21:34.966 "recv_buf_size": 4096, 00:21:34.966 "send_buf_size": 4096, 00:21:34.966 "enable_recv_pipe": true, 00:21:34.966 "enable_quickack": false, 00:21:34.966 "enable_placement_id": 0, 00:21:34.966 "enable_zerocopy_send_server": true, 00:21:34.966 "enable_zerocopy_send_client": false, 00:21:34.966 "zerocopy_threshold": 0, 00:21:34.966 "tls_version": 0, 00:21:34.966 "enable_ktls": false 00:21:34.966 } 00:21:34.966 }, 00:21:34.966 { 00:21:34.966 "method": "sock_impl_set_options", 00:21:34.966 "params": { 00:21:34.966 "impl_name": "posix", 00:21:34.966 "recv_buf_size": 2097152, 00:21:34.966 "send_buf_size": 2097152, 00:21:34.966 "enable_recv_pipe": true, 00:21:34.966 "enable_quickack": false, 00:21:34.966 "enable_placement_id": 0, 00:21:34.966 "enable_zerocopy_send_server": true, 00:21:34.967 "enable_zerocopy_send_client": false, 00:21:34.967 "zerocopy_threshold": 0, 00:21:34.967 "tls_version": 0, 00:21:34.967 "enable_ktls": false 00:21:34.967 } 00:21:34.967 } 00:21:34.967 ] 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "subsystem": "vmd", 00:21:34.967 "config": [] 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "subsystem": "accel", 00:21:34.967 "config": [ 00:21:34.967 { 00:21:34.967 "method": "accel_set_options", 00:21:34.967 "params": { 00:21:34.967 "small_cache_size": 128, 00:21:34.967 "large_cache_size": 16, 00:21:34.967 "task_count": 2048, 00:21:34.967 "sequence_count": 2048, 00:21:34.967 "buf_count": 2048 00:21:34.967 } 00:21:34.967 } 00:21:34.967 ] 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "subsystem": "bdev", 00:21:34.967 "config": [ 00:21:34.967 { 00:21:34.967 "method": "bdev_set_options", 00:21:34.967 "params": { 00:21:34.967 "bdev_io_pool_size": 65535, 00:21:34.967 "bdev_io_cache_size": 256, 00:21:34.967 "bdev_auto_examine": true, 00:21:34.967 "iobuf_small_cache_size": 128, 00:21:34.967 "iobuf_large_cache_size": 16 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "bdev_raid_set_options", 00:21:34.967 "params": { 00:21:34.967 "process_window_size_kb": 1024 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "bdev_iscsi_set_options", 00:21:34.967 "params": { 00:21:34.967 "timeout_sec": 30 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "bdev_nvme_set_options", 00:21:34.967 "params": { 00:21:34.967 "action_on_timeout": "none", 00:21:34.967 "timeout_us": 0, 00:21:34.967 "timeout_admin_us": 0, 00:21:34.967 "keep_alive_timeout_ms": 10000, 00:21:34.967 "arbitration_burst": 0, 00:21:34.967 "low_priority_weight": 0, 00:21:34.967 "medium_priority_weight": 0, 00:21:34.967 "high_priority_weight": 0, 00:21:34.967 "nvme_adminq_poll_period_us": 10000, 00:21:34.967 "nvme_ioq_poll_period_us": 0, 00:21:34.967 "io_queue_requests": 0, 00:21:34.967 "delay_cmd_submit": true, 00:21:34.967 "transport_retry_count": 4, 00:21:34.967 "bdev_retry_count": 3, 00:21:34.967 "transport_ack_timeout": 0, 00:21:34.967 "ctrlr_loss_timeout_sec": 0, 00:21:34.967 "reconnect_delay_sec": 0, 00:21:34.967 "fast_io_fail_timeout_sec": 0, 00:21:34.967 "disable_auto_failback": false, 00:21:34.967 "generate_uuids": false, 00:21:34.967 "transport_tos": 0, 00:21:34.967 "nvme_error_stat": false, 00:21:34.967 "rdma_srq_size": 0, 00:21:34.967 "io_path_stat": false, 00:21:34.967 "allow_accel_sequence": false, 00:21:34.967 "rdma_max_cq_size": 0, 00:21:34.967 "rdma_cm_event_timeout_ms": 0, 00:21:34.967 "dhchap_digests": [ 00:21:34.967 "sha256", 
00:21:34.967 "sha384", 00:21:34.967 "sha512" 00:21:34.967 ], 00:21:34.967 "dhchap_dhgroups": [ 00:21:34.967 "null", 00:21:34.967 "ffdhe2048", 00:21:34.967 "ffdhe3072", 00:21:34.967 "ffdhe4096", 00:21:34.967 "ffdhe6144", 00:21:34.967 "ffdhe8192" 00:21:34.967 ] 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "bdev_nvme_set_hotplug", 00:21:34.967 "params": { 00:21:34.967 "period_us": 100000, 00:21:34.967 "enable": false 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "bdev_malloc_create", 00:21:34.967 "params": { 00:21:34.967 "name": "malloc0", 00:21:34.967 "num_blocks": 8192, 00:21:34.967 "block_size": 4096, 00:21:34.967 "physical_block_size": 4096, 00:21:34.967 "uuid": "344b3efe-4923-4905-80a8-8281dc2dc07f", 00:21:34.967 "optimal_io_boundary": 0 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "bdev_wait_for_examine" 00:21:34.967 } 00:21:34.967 ] 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "subsystem": "nbd", 00:21:34.967 "config": [] 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "subsystem": "scheduler", 00:21:34.967 "config": [ 00:21:34.967 { 00:21:34.967 "method": "framework_set_scheduler", 00:21:34.967 "params": { 00:21:34.967 "name": "static" 00:21:34.967 } 00:21:34.967 } 00:21:34.967 ] 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "subsystem": "nvmf", 00:21:34.967 "config": [ 00:21:34.967 { 00:21:34.967 "method": "nvmf_set_config", 00:21:34.967 "params": { 00:21:34.967 "discovery_filter": "match_any", 00:21:34.967 "admin_cmd_passthru": { 00:21:34.967 "identify_ctrlr": false 00:21:34.967 } 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "nvmf_set_max_subsystems", 00:21:34.967 "params": { 00:21:34.967 "max_subsystems": 1024 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "nvmf_set_crdt", 00:21:34.967 "params": { 00:21:34.967 "crdt1": 0, 00:21:34.967 "crdt2": 0, 00:21:34.967 "crdt3": 0 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "nvmf_create_transport", 00:21:34.967 "params": { 00:21:34.967 "trtype": "TCP", 00:21:34.967 "max_queue_depth": 128, 00:21:34.967 "max_io_qpairs_per_ctrlr": 127, 00:21:34.967 "in_capsule_data_size": 4096, 00:21:34.967 "max_io_size": 131072, 00:21:34.967 "io_unit_size": 131072, 00:21:34.967 "max_aq_depth": 128, 00:21:34.967 "num_shared_buffers": 511, 00:21:34.967 "buf_cache_size": 4294967295, 00:21:34.967 "dif_insert_or_strip": false, 00:21:34.967 "zcopy": false, 00:21:34.967 "c2h_success": false, 00:21:34.967 "sock_priority": 0, 00:21:34.967 "abort_timeout_sec": 1, 00:21:34.967 "ack_timeout": 0, 00:21:34.967 "data_wr_pool_size": 0 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "nvmf_create_subsystem", 00:21:34.967 "params": { 00:21:34.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.967 "allow_any_host": false, 00:21:34.967 "serial_number": "00000000000000000000", 00:21:34.967 "model_number": "SPDK bdev Controller", 00:21:34.967 "max_namespaces": 32, 00:21:34.967 "min_cntlid": 1, 00:21:34.967 "max_cntlid": 65519, 00:21:34.967 "ana_reporting": false 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "nvmf_subsystem_add_host", 00:21:34.967 "params": { 00:21:34.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.967 "host": "nqn.2016-06.io.spdk:host1", 00:21:34.967 "psk": "key0" 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "nvmf_subsystem_add_ns", 00:21:34.967 "params": { 00:21:34.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.967 "namespace": { 00:21:34.967 "nsid": 1, 
00:21:34.967 "bdev_name": "malloc0", 00:21:34.967 "nguid": "344B3EFE4923490580A88281DC2DC07F", 00:21:34.967 "uuid": "344b3efe-4923-4905-80a8-8281dc2dc07f", 00:21:34.967 "no_auto_visible": false 00:21:34.967 } 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "nvmf_subsystem_add_listener", 00:21:34.967 "params": { 00:21:34.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.967 "listen_address": { 00:21:34.967 "trtype": "TCP", 00:21:34.967 "adrfam": "IPv4", 00:21:34.967 "traddr": "10.0.0.2", 00:21:34.967 "trsvcid": "4420" 00:21:34.967 }, 00:21:34.967 "secure_channel": false, 00:21:34.967 "sock_impl": "ssl" 00:21:34.967 } 00:21:34.967 } 00:21:34.967 ] 00:21:34.967 } 00:21:34.967 ] 00:21:34.967 }' 00:21:34.967 21:12:01 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:34.967 21:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:34.967 "subsystems": [ 00:21:34.967 { 00:21:34.967 "subsystem": "keyring", 00:21:34.967 "config": [ 00:21:34.967 { 00:21:34.967 "method": "keyring_file_add_key", 00:21:34.967 "params": { 00:21:34.967 "name": "key0", 00:21:34.967 "path": "/tmp/tmp.0YxTVOOO5N" 00:21:34.967 } 00:21:34.967 } 00:21:34.967 ] 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "subsystem": "iobuf", 00:21:34.967 "config": [ 00:21:34.967 { 00:21:34.967 "method": "iobuf_set_options", 00:21:34.967 "params": { 00:21:34.967 "small_pool_count": 8192, 00:21:34.967 "large_pool_count": 1024, 00:21:34.967 "small_bufsize": 8192, 00:21:34.967 "large_bufsize": 135168 00:21:34.967 } 00:21:34.967 } 00:21:34.967 ] 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "subsystem": "sock", 00:21:34.967 "config": [ 00:21:34.967 { 00:21:34.967 "method": "sock_set_default_impl", 00:21:34.967 "params": { 00:21:34.967 "impl_name": "posix" 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.967 "method": "sock_impl_set_options", 00:21:34.967 "params": { 00:21:34.967 "impl_name": "ssl", 00:21:34.967 "recv_buf_size": 4096, 00:21:34.967 "send_buf_size": 4096, 00:21:34.967 "enable_recv_pipe": true, 00:21:34.967 "enable_quickack": false, 00:21:34.967 "enable_placement_id": 0, 00:21:34.967 "enable_zerocopy_send_server": true, 00:21:34.967 "enable_zerocopy_send_client": false, 00:21:34.967 "zerocopy_threshold": 0, 00:21:34.967 "tls_version": 0, 00:21:34.967 "enable_ktls": false 00:21:34.967 } 00:21:34.967 }, 00:21:34.967 { 00:21:34.968 "method": "sock_impl_set_options", 00:21:34.968 "params": { 00:21:34.968 "impl_name": "posix", 00:21:34.968 "recv_buf_size": 2097152, 00:21:34.968 "send_buf_size": 2097152, 00:21:34.968 "enable_recv_pipe": true, 00:21:34.968 "enable_quickack": false, 00:21:34.968 "enable_placement_id": 0, 00:21:34.968 "enable_zerocopy_send_server": true, 00:21:34.968 "enable_zerocopy_send_client": false, 00:21:34.968 "zerocopy_threshold": 0, 00:21:34.968 "tls_version": 0, 00:21:34.968 "enable_ktls": false 00:21:34.968 } 00:21:34.968 } 00:21:34.968 ] 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "subsystem": "vmd", 00:21:34.968 "config": [] 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "subsystem": "accel", 00:21:34.968 "config": [ 00:21:34.968 { 00:21:34.968 "method": "accel_set_options", 00:21:34.968 "params": { 00:21:34.968 "small_cache_size": 128, 00:21:34.968 "large_cache_size": 16, 00:21:34.968 "task_count": 2048, 00:21:34.968 "sequence_count": 2048, 00:21:34.968 "buf_count": 2048 00:21:34.968 } 00:21:34.968 } 00:21:34.968 ] 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "subsystem": "bdev", 
00:21:34.968 "config": [ 00:21:34.968 { 00:21:34.968 "method": "bdev_set_options", 00:21:34.968 "params": { 00:21:34.968 "bdev_io_pool_size": 65535, 00:21:34.968 "bdev_io_cache_size": 256, 00:21:34.968 "bdev_auto_examine": true, 00:21:34.968 "iobuf_small_cache_size": 128, 00:21:34.968 "iobuf_large_cache_size": 16 00:21:34.968 } 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "method": "bdev_raid_set_options", 00:21:34.968 "params": { 00:21:34.968 "process_window_size_kb": 1024 00:21:34.968 } 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "method": "bdev_iscsi_set_options", 00:21:34.968 "params": { 00:21:34.968 "timeout_sec": 30 00:21:34.968 } 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "method": "bdev_nvme_set_options", 00:21:34.968 "params": { 00:21:34.968 "action_on_timeout": "none", 00:21:34.968 "timeout_us": 0, 00:21:34.968 "timeout_admin_us": 0, 00:21:34.968 "keep_alive_timeout_ms": 10000, 00:21:34.968 "arbitration_burst": 0, 00:21:34.968 "low_priority_weight": 0, 00:21:34.968 "medium_priority_weight": 0, 00:21:34.968 "high_priority_weight": 0, 00:21:34.968 "nvme_adminq_poll_period_us": 10000, 00:21:34.968 "nvme_ioq_poll_period_us": 0, 00:21:34.968 "io_queue_requests": 512, 00:21:34.968 "delay_cmd_submit": true, 00:21:34.968 "transport_retry_count": 4, 00:21:34.968 "bdev_retry_count": 3, 00:21:34.968 "transport_ack_timeout": 0, 00:21:34.968 "ctrlr_loss_timeout_sec": 0, 00:21:34.968 "reconnect_delay_sec": 0, 00:21:34.968 "fast_io_fail_timeout_sec": 0, 00:21:34.968 "disable_auto_failback": false, 00:21:34.968 "generate_uuids": false, 00:21:34.968 "transport_tos": 0, 00:21:34.968 "nvme_error_stat": false, 00:21:34.968 "rdma_srq_size": 0, 00:21:34.968 "io_path_stat": false, 00:21:34.968 "allow_accel_sequence": false, 00:21:34.968 "rdma_max_cq_size": 0, 00:21:34.968 "rdma_cm_event_timeout_ms": 0, 00:21:34.968 "dhchap_digests": [ 00:21:34.968 "sha256", 00:21:34.968 "sha384", 00:21:34.968 "sha512" 00:21:34.968 ], 00:21:34.968 "dhchap_dhgroups": [ 00:21:34.968 "null", 00:21:34.968 "ffdhe2048", 00:21:34.968 "ffdhe3072", 00:21:34.968 "ffdhe4096", 00:21:34.968 "ffdhe6144", 00:21:34.968 "ffdhe8192" 00:21:34.968 ] 00:21:34.968 } 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "method": "bdev_nvme_attach_controller", 00:21:34.968 "params": { 00:21:34.968 "name": "nvme0", 00:21:34.968 "trtype": "TCP", 00:21:34.968 "adrfam": "IPv4", 00:21:34.968 "traddr": "10.0.0.2", 00:21:34.968 "trsvcid": "4420", 00:21:34.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.968 "prchk_reftag": false, 00:21:34.968 "prchk_guard": false, 00:21:34.968 "ctrlr_loss_timeout_sec": 0, 00:21:34.968 "reconnect_delay_sec": 0, 00:21:34.968 "fast_io_fail_timeout_sec": 0, 00:21:34.968 "psk": "key0", 00:21:34.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.968 "hdgst": false, 00:21:34.968 "ddgst": false 00:21:34.968 } 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "method": "bdev_nvme_set_hotplug", 00:21:34.968 "params": { 00:21:34.968 "period_us": 100000, 00:21:34.968 "enable": false 00:21:34.968 } 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "method": "bdev_enable_histogram", 00:21:34.968 "params": { 00:21:34.968 "name": "nvme0n1", 00:21:34.968 "enable": true 00:21:34.968 } 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "method": "bdev_wait_for_examine" 00:21:34.968 } 00:21:34.968 ] 00:21:34.968 }, 00:21:34.968 { 00:21:34.968 "subsystem": "nbd", 00:21:34.968 "config": [] 00:21:34.968 } 00:21:34.968 ] 00:21:34.968 }' 00:21:34.968 21:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2008784 00:21:34.968 21:12:02 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 2008784 ']' 00:21:34.968 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2008784 00:21:34.968 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:34.968 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.968 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2008784 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2008784' 00:21:35.229 killing process with pid 2008784 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2008784 00:21:35.229 Received shutdown signal, test time was about 1.000000 seconds 00:21:35.229 00:21:35.229 Latency(us) 00:21:35.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.229 =================================================================================================================== 00:21:35.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2008784 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2008676 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2008676 ']' 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2008676 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2008676 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2008676' 00:21:35.229 killing process with pid 2008676 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2008676 00:21:35.229 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2008676 00:21:35.490 21:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:35.490 21:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.490 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.490 21:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:35.490 "subsystems": [ 00:21:35.490 { 00:21:35.490 "subsystem": "keyring", 00:21:35.490 "config": [ 00:21:35.490 { 00:21:35.490 "method": "keyring_file_add_key", 00:21:35.490 "params": { 00:21:35.490 "name": "key0", 00:21:35.490 "path": "/tmp/tmp.0YxTVOOO5N" 00:21:35.490 } 00:21:35.490 } 00:21:35.490 ] 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "subsystem": "iobuf", 00:21:35.490 "config": [ 00:21:35.490 { 00:21:35.490 "method": "iobuf_set_options", 00:21:35.490 "params": { 00:21:35.490 "small_pool_count": 8192, 00:21:35.490 "large_pool_count": 1024, 00:21:35.490 "small_bufsize": 8192, 00:21:35.490 "large_bufsize": 135168 00:21:35.490 } 00:21:35.490 } 00:21:35.490 ] 00:21:35.490 }, 
00:21:35.490 { 00:21:35.490 "subsystem": "sock", 00:21:35.490 "config": [ 00:21:35.490 { 00:21:35.490 "method": "sock_set_default_impl", 00:21:35.490 "params": { 00:21:35.490 "impl_name": "posix" 00:21:35.490 } 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "method": "sock_impl_set_options", 00:21:35.490 "params": { 00:21:35.490 "impl_name": "ssl", 00:21:35.490 "recv_buf_size": 4096, 00:21:35.490 "send_buf_size": 4096, 00:21:35.490 "enable_recv_pipe": true, 00:21:35.490 "enable_quickack": false, 00:21:35.490 "enable_placement_id": 0, 00:21:35.490 "enable_zerocopy_send_server": true, 00:21:35.490 "enable_zerocopy_send_client": false, 00:21:35.490 "zerocopy_threshold": 0, 00:21:35.490 "tls_version": 0, 00:21:35.490 "enable_ktls": false 00:21:35.490 } 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "method": "sock_impl_set_options", 00:21:35.490 "params": { 00:21:35.490 "impl_name": "posix", 00:21:35.490 "recv_buf_size": 2097152, 00:21:35.490 "send_buf_size": 2097152, 00:21:35.490 "enable_recv_pipe": true, 00:21:35.490 "enable_quickack": false, 00:21:35.490 "enable_placement_id": 0, 00:21:35.490 "enable_zerocopy_send_server": true, 00:21:35.490 "enable_zerocopy_send_client": false, 00:21:35.490 "zerocopy_threshold": 0, 00:21:35.490 "tls_version": 0, 00:21:35.490 "enable_ktls": false 00:21:35.490 } 00:21:35.490 } 00:21:35.490 ] 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "subsystem": "vmd", 00:21:35.490 "config": [] 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "subsystem": "accel", 00:21:35.490 "config": [ 00:21:35.490 { 00:21:35.490 "method": "accel_set_options", 00:21:35.490 "params": { 00:21:35.490 "small_cache_size": 128, 00:21:35.490 "large_cache_size": 16, 00:21:35.490 "task_count": 2048, 00:21:35.490 "sequence_count": 2048, 00:21:35.490 "buf_count": 2048 00:21:35.490 } 00:21:35.490 } 00:21:35.490 ] 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "subsystem": "bdev", 00:21:35.490 "config": [ 00:21:35.490 { 00:21:35.490 "method": "bdev_set_options", 00:21:35.490 "params": { 00:21:35.490 "bdev_io_pool_size": 65535, 00:21:35.490 "bdev_io_cache_size": 256, 00:21:35.490 "bdev_auto_examine": true, 00:21:35.490 "iobuf_small_cache_size": 128, 00:21:35.490 "iobuf_large_cache_size": 16 00:21:35.490 } 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "method": "bdev_raid_set_options", 00:21:35.490 "params": { 00:21:35.490 "process_window_size_kb": 1024 00:21:35.490 } 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "method": "bdev_iscsi_set_options", 00:21:35.490 "params": { 00:21:35.490 "timeout_sec": 30 00:21:35.490 } 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "method": "bdev_nvme_set_options", 00:21:35.490 "params": { 00:21:35.490 "action_on_timeout": "none", 00:21:35.490 "timeout_us": 0, 00:21:35.490 "timeout_admin_us": 0, 00:21:35.490 "keep_alive_timeout_ms": 10000, 00:21:35.490 "arbitration_burst": 0, 00:21:35.490 "low_priority_weight": 0, 00:21:35.490 "medium_priority_weight": 0, 00:21:35.490 "high_priority_weight": 0, 00:21:35.490 "nvme_adminq_poll_period_us": 10000, 00:21:35.490 "nvme_ioq_poll_period_us": 0, 00:21:35.490 "io_queue_requests": 0, 00:21:35.490 "delay_cmd_submit": true, 00:21:35.490 "transport_retry_count": 4, 00:21:35.490 "bdev_retry_count": 3, 00:21:35.490 "transport_ack_timeout": 0, 00:21:35.490 "ctrlr_loss_timeout_sec": 0, 00:21:35.490 "reconnect_delay_sec": 0, 00:21:35.490 "fast_io_fail_timeout_sec": 0, 00:21:35.490 "disable_auto_failback": false, 00:21:35.490 "generate_uuids": false, 00:21:35.490 "transport_tos": 0, 00:21:35.490 "nvme_error_stat": false, 00:21:35.490 "rdma_srq_size": 0, 
00:21:35.490 "io_path_stat": false, 00:21:35.490 "allow_accel_sequence": false, 00:21:35.490 "rdma_max_cq_size": 0, 00:21:35.490 "rdma_cm_event_timeout_ms": 0, 00:21:35.490 "dhchap_digests": [ 00:21:35.490 "sha256", 00:21:35.490 "sha384", 00:21:35.490 "sha512" 00:21:35.490 ], 00:21:35.490 "dhchap_dhgroups": [ 00:21:35.490 "null", 00:21:35.490 "ffdhe2048", 00:21:35.490 "ffdhe3072", 00:21:35.490 "ffdhe4096", 00:21:35.490 "ffdhe6144", 00:21:35.490 "ffdhe8192" 00:21:35.490 ] 00:21:35.490 } 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "method": "bdev_nvme_set_hotplug", 00:21:35.490 "params": { 00:21:35.490 "period_us": 100000, 00:21:35.490 "enable": false 00:21:35.490 } 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "method": "bdev_malloc_create", 00:21:35.490 "params": { 00:21:35.490 "name": "malloc0", 00:21:35.490 "num_blocks": 8192, 00:21:35.490 "block_size": 4096, 00:21:35.490 "physical_block_size": 4096, 00:21:35.490 "uuid": "344b3efe-4923-4905-80a8-8281dc2dc07f", 00:21:35.490 "optimal_io_boundary": 0 00:21:35.490 } 00:21:35.490 }, 00:21:35.490 { 00:21:35.490 "method": "bdev_wait_for_examine" 00:21:35.490 } 00:21:35.490 ] 00:21:35.490 }, 00:21:35.491 { 00:21:35.491 "subsystem": "nbd", 00:21:35.491 "config": [] 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "subsystem": "scheduler", 00:21:35.491 "config": [ 00:21:35.491 { 00:21:35.491 "method": "framework_set_scheduler", 00:21:35.491 "params": { 00:21:35.491 "name": "static" 00:21:35.491 } 00:21:35.491 } 00:21:35.491 ] 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "subsystem": "nvmf", 00:21:35.491 "config": [ 00:21:35.491 { 00:21:35.491 "method": "nvmf_set_config", 00:21:35.491 "params": { 00:21:35.491 "discovery_filter": "match_any", 00:21:35.491 "admin_cmd_passthru": { 00:21:35.491 "identify_ctrlr": false 00:21:35.491 } 00:21:35.491 } 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "method": "nvmf_set_max_subsystems", 00:21:35.491 "params": { 00:21:35.491 "max_subsystems": 1024 00:21:35.491 } 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "method": "nvmf_set_crdt", 00:21:35.491 "params": { 00:21:35.491 "crdt1": 0, 00:21:35.491 "crdt2": 0, 00:21:35.491 "crdt3": 0 00:21:35.491 } 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "method": "nvmf_create_transport", 00:21:35.491 "params": { 00:21:35.491 "trtype": "TCP", 00:21:35.491 "max_queue_depth": 128, 00:21:35.491 "max_io_qpairs_per_ctrlr": 127, 00:21:35.491 "in_capsule_data_size": 4096, 00:21:35.491 "max_io_size": 131072, 00:21:35.491 "io_unit_size": 131072, 00:21:35.491 "max_aq_depth": 128, 00:21:35.491 "num_shared_buffers": 511, 00:21:35.491 "buf_cache_size": 4294967295, 00:21:35.491 "dif_insert_or_strip": false, 00:21:35.491 "zcopy": false, 00:21:35.491 "c2h_success": false, 00:21:35.491 "sock_priority": 0, 00:21:35.491 "abort_timeout_sec": 1, 00:21:35.491 "ack_timeout": 0, 00:21:35.491 "data_wr_pool_size": 0 00:21:35.491 } 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "method": "nvmf_create_subsystem", 00:21:35.491 "params": { 00:21:35.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.491 "allow_any_host": false, 00:21:35.491 "serial_number": "00000000000000000000", 00:21:35.491 "model_number": "SPDK bdev Controller", 00:21:35.491 "max_namespaces": 32, 00:21:35.491 "min_cntlid": 1, 00:21:35.491 "max_cntlid": 65519, 00:21:35.491 "ana_reporting": false 00:21:35.491 } 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "method": "nvmf_subsystem_add_host", 00:21:35.491 "params": { 00:21:35.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.491 "host": "nqn.2016-06.io.spdk:host1", 00:21:35.491 "psk": "key0" 00:21:35.491 } 
00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "method": "nvmf_subsystem_add_ns", 00:21:35.491 "params": { 00:21:35.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.491 "namespace": { 00:21:35.491 "nsid": 1, 00:21:35.491 "bdev_name": "malloc0", 00:21:35.491 "nguid": "344B3EFE4923490580A88281DC2DC07F", 00:21:35.491 "uuid": "344b3efe-4923-4905-80a8-8281dc2dc07f", 00:21:35.491 "no_auto_visible": false 00:21:35.491 } 00:21:35.491 } 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "method": "nvmf_subsystem_add_listener", 00:21:35.491 "params": { 00:21:35.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.491 "listen_address": { 00:21:35.491 "trtype": "TCP", 00:21:35.491 "adrfam": "IPv4", 00:21:35.491 "traddr": "10.0.0.2", 00:21:35.491 "trsvcid": "4420" 00:21:35.491 }, 00:21:35.491 "secure_channel": false, 00:21:35.491 "sock_impl": "ssl" 00:21:35.491 } 00:21:35.491 } 00:21:35.491 ] 00:21:35.491 } 00:21:35.491 ] 00:21:35.491 }' 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2009573 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2009573 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2009573 ']' 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.491 21:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.491 [2024-07-15 21:12:02.647593] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:35.491 [2024-07-15 21:12:02.647652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.491 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.491 [2024-07-15 21:12:02.719506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.751 [2024-07-15 21:12:02.784718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.752 [2024-07-15 21:12:02.784755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.752 [2024-07-15 21:12:02.784762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.752 [2024-07-15 21:12:02.784769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.752 [2024-07-15 21:12:02.784774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.752 [2024-07-15 21:12:02.784822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.752 [2024-07-15 21:12:02.982068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.752 [2024-07-15 21:12:03.014081] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.752 [2024-07-15 21:12:03.027411] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2009668 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2009668 /var/tmp/bdevperf.sock 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2009668 ']' 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
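What the save_config / -c /dev/fd steps above are doing: the JSON captured earlier from the live target (rpc_cmd save_config, the suite's wrapper around rpc.py) and from bdevperf is fed straight back as the startup configuration of a fresh nvmf_tgt and a fresh bdevperf, so the whole TLS setup is replayed without issuing the RPCs again. The /dev/fd/62 and /dev/fd/63 paths in the trace are consistent with bash process substitution; a sketch of the pattern, with the ip netns exec cvl_0_0_ns_spdk wrapper and full binary paths trimmed:

  # capture the live configuration of both applications over their RPC sockets
  tgtcfg=$(rpc.py save_config)
  bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)

  # relaunch them non-interactively, feeding the JSON back through a file descriptor
  nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
  bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")

Note that the bdevperf config dump (bperfcfg) additionally carries bdev_nvme_attach_controller and bdev_enable_histogram entries, so the TLS-attached controller comes up during startup rather than through a later RPC.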
00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.322 21:12:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:36.322 "subsystems": [ 00:21:36.322 { 00:21:36.322 "subsystem": "keyring", 00:21:36.322 "config": [ 00:21:36.322 { 00:21:36.322 "method": "keyring_file_add_key", 00:21:36.322 "params": { 00:21:36.322 "name": "key0", 00:21:36.322 "path": "/tmp/tmp.0YxTVOOO5N" 00:21:36.322 } 00:21:36.322 } 00:21:36.322 ] 00:21:36.322 }, 00:21:36.322 { 00:21:36.322 "subsystem": "iobuf", 00:21:36.322 "config": [ 00:21:36.322 { 00:21:36.322 "method": "iobuf_set_options", 00:21:36.322 "params": { 00:21:36.322 "small_pool_count": 8192, 00:21:36.322 "large_pool_count": 1024, 00:21:36.322 "small_bufsize": 8192, 00:21:36.322 "large_bufsize": 135168 00:21:36.322 } 00:21:36.322 } 00:21:36.322 ] 00:21:36.322 }, 00:21:36.322 { 00:21:36.322 "subsystem": "sock", 00:21:36.322 "config": [ 00:21:36.322 { 00:21:36.322 "method": "sock_set_default_impl", 00:21:36.322 "params": { 00:21:36.322 "impl_name": "posix" 00:21:36.322 } 00:21:36.322 }, 00:21:36.322 { 00:21:36.322 "method": "sock_impl_set_options", 00:21:36.322 "params": { 00:21:36.322 "impl_name": "ssl", 00:21:36.322 "recv_buf_size": 4096, 00:21:36.322 "send_buf_size": 4096, 00:21:36.322 "enable_recv_pipe": true, 00:21:36.322 "enable_quickack": false, 00:21:36.322 "enable_placement_id": 0, 00:21:36.322 "enable_zerocopy_send_server": true, 00:21:36.322 "enable_zerocopy_send_client": false, 00:21:36.322 "zerocopy_threshold": 0, 00:21:36.322 "tls_version": 0, 00:21:36.322 "enable_ktls": false 00:21:36.322 } 00:21:36.322 }, 00:21:36.322 { 00:21:36.322 "method": "sock_impl_set_options", 00:21:36.322 "params": { 00:21:36.322 "impl_name": "posix", 00:21:36.322 "recv_buf_size": 2097152, 00:21:36.322 "send_buf_size": 2097152, 00:21:36.322 "enable_recv_pipe": true, 00:21:36.322 "enable_quickack": false, 00:21:36.322 "enable_placement_id": 0, 00:21:36.322 "enable_zerocopy_send_server": true, 00:21:36.322 "enable_zerocopy_send_client": false, 00:21:36.322 "zerocopy_threshold": 0, 00:21:36.322 "tls_version": 0, 00:21:36.322 "enable_ktls": false 00:21:36.322 } 00:21:36.322 } 00:21:36.322 ] 00:21:36.322 }, 00:21:36.322 { 00:21:36.322 "subsystem": "vmd", 00:21:36.322 "config": [] 00:21:36.322 }, 00:21:36.322 { 00:21:36.322 "subsystem": "accel", 00:21:36.322 "config": [ 00:21:36.322 { 00:21:36.322 "method": "accel_set_options", 00:21:36.322 "params": { 00:21:36.322 "small_cache_size": 128, 00:21:36.322 "large_cache_size": 16, 00:21:36.322 "task_count": 2048, 00:21:36.322 "sequence_count": 2048, 00:21:36.322 "buf_count": 2048 00:21:36.322 } 00:21:36.322 } 00:21:36.322 ] 00:21:36.322 }, 00:21:36.322 { 00:21:36.322 "subsystem": "bdev", 00:21:36.322 "config": [ 00:21:36.322 { 00:21:36.322 "method": "bdev_set_options", 00:21:36.322 "params": { 00:21:36.322 "bdev_io_pool_size": 65535, 00:21:36.322 "bdev_io_cache_size": 256, 00:21:36.322 "bdev_auto_examine": true, 00:21:36.322 "iobuf_small_cache_size": 128, 00:21:36.322 "iobuf_large_cache_size": 16 00:21:36.322 } 00:21:36.322 }, 00:21:36.322 { 00:21:36.322 "method": "bdev_raid_set_options", 00:21:36.322 "params": { 00:21:36.322 "process_window_size_kb": 1024 00:21:36.323 } 
00:21:36.323 }, 00:21:36.323 { 00:21:36.323 "method": "bdev_iscsi_set_options", 00:21:36.323 "params": { 00:21:36.323 "timeout_sec": 30 00:21:36.323 } 00:21:36.323 }, 00:21:36.323 { 00:21:36.323 "method": "bdev_nvme_set_options", 00:21:36.323 "params": { 00:21:36.323 "action_on_timeout": "none", 00:21:36.323 "timeout_us": 0, 00:21:36.323 "timeout_admin_us": 0, 00:21:36.323 "keep_alive_timeout_ms": 10000, 00:21:36.323 "arbitration_burst": 0, 00:21:36.323 "low_priority_weight": 0, 00:21:36.323 "medium_priority_weight": 0, 00:21:36.323 "high_priority_weight": 0, 00:21:36.323 "nvme_adminq_poll_period_us": 10000, 00:21:36.323 "nvme_ioq_poll_period_us": 0, 00:21:36.323 "io_queue_requests": 512, 00:21:36.323 "delay_cmd_submit": true, 00:21:36.323 "transport_retry_count": 4, 00:21:36.323 "bdev_retry_count": 3, 00:21:36.323 "transport_ack_timeout": 0, 00:21:36.323 "ctrlr_loss_timeout_sec": 0, 00:21:36.323 "reconnect_delay_sec": 0, 00:21:36.323 "fast_io_fail_timeout_sec": 0, 00:21:36.323 "disable_auto_failback": false, 00:21:36.323 "generate_uuids": false, 00:21:36.323 "transport_tos": 0, 00:21:36.323 "nvme_error_stat": false, 00:21:36.323 "rdma_srq_size": 0, 00:21:36.323 "io_path_stat": false, 00:21:36.323 "allow_accel_sequence": false, 00:21:36.323 "rdma_max_cq_size": 0, 00:21:36.323 "rdma_cm_event_timeout_ms": 0, 00:21:36.323 "dhchap_digests": [ 00:21:36.323 "sha256", 00:21:36.323 "sha384", 00:21:36.323 "sha512" 00:21:36.323 ], 00:21:36.323 "dhchap_dhgroups": [ 00:21:36.323 "null", 00:21:36.323 "ffdhe2048", 00:21:36.323 "ffdhe3072", 00:21:36.323 "ffdhe4096", 00:21:36.323 "ffdhe6144", 00:21:36.323 "ffdhe8192" 00:21:36.323 ] 00:21:36.323 } 00:21:36.323 }, 00:21:36.323 { 00:21:36.323 "method": "bdev_nvme_attach_controller", 00:21:36.323 "params": { 00:21:36.323 "name": "nvme0", 00:21:36.323 "trtype": "TCP", 00:21:36.323 "adrfam": "IPv4", 00:21:36.323 "traddr": "10.0.0.2", 00:21:36.323 "trsvcid": "4420", 00:21:36.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.323 "prchk_reftag": false, 00:21:36.323 "prchk_guard": false, 00:21:36.323 "ctrlr_loss_timeout_sec": 0, 00:21:36.323 "reconnect_delay_sec": 0, 00:21:36.323 "fast_io_fail_timeout_sec": 0, 00:21:36.323 "psk": "key0", 00:21:36.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.323 "hdgst": false, 00:21:36.323 "ddgst": false 00:21:36.323 } 00:21:36.323 }, 00:21:36.323 { 00:21:36.323 "method": "bdev_nvme_set_hotplug", 00:21:36.323 "params": { 00:21:36.323 "period_us": 100000, 00:21:36.323 "enable": false 00:21:36.323 } 00:21:36.323 }, 00:21:36.323 { 00:21:36.323 "method": "bdev_enable_histogram", 00:21:36.323 "params": { 00:21:36.323 "name": "nvme0n1", 00:21:36.323 "enable": true 00:21:36.323 } 00:21:36.323 }, 00:21:36.323 { 00:21:36.323 "method": "bdev_wait_for_examine" 00:21:36.323 } 00:21:36.323 ] 00:21:36.323 }, 00:21:36.323 { 00:21:36.323 "subsystem": "nbd", 00:21:36.323 "config": [] 00:21:36.323 } 00:21:36.323 ] 00:21:36.323 }' 00:21:36.323 [2024-07-15 21:12:03.503379] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:21:36.323 [2024-07-15 21:12:03.503429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009668 ] 00:21:36.323 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.323 [2024-07-15 21:12:03.582459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.583 [2024-07-15 21:12:03.636397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.583 [2024-07-15 21:12:03.770171] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.153 21:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.153 21:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:37.153 21:12:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.153 21:12:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:37.153 21:12:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.153 21:12:04 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:37.414 Running I/O for 1 seconds... 00:21:38.357 00:21:38.357 Latency(us) 00:21:38.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.357 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:38.357 Verification LBA range: start 0x0 length 0x2000 00:21:38.357 nvme0n1 : 1.02 3919.21 15.31 0.00 0.00 32311.60 6280.53 94371.84 00:21:38.357 =================================================================================================================== 00:21:38.357 Total : 3919.21 15.31 0.00 0.00 32311.60 6280.53 94371.84 00:21:38.357 0 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:38.357 nvmf_trace.0 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2009668 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2009668 ']' 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2009668 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.357 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2009668 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2009668' 00:21:38.618 killing process with pid 2009668 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2009668 00:21:38.618 Received shutdown signal, test time was about 1.000000 seconds 00:21:38.618 00:21:38.618 Latency(us) 00:21:38.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.618 =================================================================================================================== 00:21:38.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2009668 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:38.618 rmmod nvme_tcp 00:21:38.618 rmmod nvme_fabrics 00:21:38.618 rmmod nvme_keyring 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2009573 ']' 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2009573 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2009573 ']' 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2009573 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.618 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2009573 00:21:38.879 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:38.879 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:38.879 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2009573' 00:21:38.879 killing process with pid 2009573 00:21:38.879 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2009573 00:21:38.879 21:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2009573 00:21:38.879 21:12:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.879 21:12:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:38.879 21:12:06 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:38.879 21:12:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.879 21:12:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.879 21:12:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.879 21:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.879 21:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.427 21:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:41.427 21:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Vd1PaarX6F /tmp/tmp.K2Q2DfBJ4V /tmp/tmp.0YxTVOOO5N 00:21:41.427 00:21:41.427 real 1m23.707s 00:21:41.427 user 2m8.011s 00:21:41.427 sys 0m26.706s 00:21:41.427 21:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.427 21:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.427 ************************************ 00:21:41.427 END TEST nvmf_tls 00:21:41.427 ************************************ 00:21:41.427 21:12:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:41.427 21:12:08 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:41.427 21:12:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:41.427 21:12:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.427 21:12:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.427 ************************************ 00:21:41.427 START TEST nvmf_fips 00:21:41.427 ************************************ 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:41.427 * Looking for test storage... 
00:21:41.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.427 21:12:08 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:41.427 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:41.428 Error setting digest 00:21:41.428 0012A9EA777F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:41.428 0012A9EA777F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:41.428 21:12:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.566 
21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:49.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:49.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.566 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:49.567 Found net devices under 0000:31:00.0: cvl_0_0 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:49.567 Found net devices under 0000:31:00.1: cvl_0_1 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:21:49.567 00:21:49.567 --- 10.0.0.2 ping statistics --- 00:21:49.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.567 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:21:49.567 00:21:49.567 --- 10.0.0.1 ping statistics --- 00:21:49.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.567 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2015424 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2015424 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2015424 ']' 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.567 21:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:49.567 [2024-07-15 21:12:16.820993] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:49.567 [2024-07-15 21:12:16.821064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.828 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.828 [2024-07-15 21:12:16.917090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.828 [2024-07-15 21:12:17.008288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.828 [2024-07-15 21:12:17.008353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:49.828 [2024-07-15 21:12:17.008361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.828 [2024-07-15 21:12:17.008368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.828 [2024-07-15 21:12:17.008374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.828 [2024-07-15 21:12:17.008399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:50.401 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:50.662 [2024-07-15 21:12:17.783588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.662 [2024-07-15 21:12:17.799584] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.662 [2024-07-15 21:12:17.799822] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.662 [2024-07-15 21:12:17.829747] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:50.662 malloc0 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2015593 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2015593 /var/tmp/bdevperf.sock 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2015593 ']' 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.662 21:12:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.662 [2024-07-15 21:12:17.924375] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:50.662 [2024-07-15 21:12:17.924450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015593 ] 00:21:50.923 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.923 [2024-07-15 21:12:17.990595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.923 [2024-07-15 21:12:18.054324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.495 21:12:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.495 21:12:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:51.496 21:12:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:51.755 [2024-07-15 21:12:18.821779] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.756 [2024-07-15 21:12:18.821854] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.756 TLSTESTn1 00:21:51.756 21:12:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.756 Running I/O for 10 seconds... 
00:22:01.755 00:22:01.755 Latency(us) 00:22:01.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.755 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.755 Verification LBA range: start 0x0 length 0x2000 00:22:01.755 TLSTESTn1 : 10.01 5356.13 20.92 0.00 0.00 23865.00 6007.47 68594.35 00:22:01.756 =================================================================================================================== 00:22:01.756 Total : 5356.13 20.92 0.00 0.00 23865.00 6007.47 68594.35 00:22:01.756 0 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:02.016 nvmf_trace.0 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2015593 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2015593 ']' 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2015593 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2015593 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2015593' 00:22:02.016 killing process with pid 2015593 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2015593 00:22:02.016 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.016 00:22:02.016 Latency(us) 00:22:02.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.016 =================================================================================================================== 00:22:02.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.016 [2024-07-15 21:12:29.207726] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.016 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2015593 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.277 rmmod nvme_tcp 00:22:02.277 rmmod nvme_fabrics 00:22:02.277 rmmod nvme_keyring 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2015424 ']' 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2015424 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2015424 ']' 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2015424 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2015424 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2015424' 00:22:02.277 killing process with pid 2015424 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2015424 00:22:02.277 [2024-07-15 21:12:29.442392] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2015424 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.277 21:12:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.820 21:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:04.820 21:12:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.820 00:22:04.820 real 0m23.426s 00:22:04.820 user 0m24.069s 00:22:04.820 sys 0m10.015s 00:22:04.820 21:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.820 21:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:04.820 ************************************ 00:22:04.820 END TEST nvmf_fips 
00:22:04.820 ************************************ 00:22:04.820 21:12:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:04.820 21:12:31 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:04.820 21:12:31 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:04.820 21:12:31 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:04.820 21:12:31 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:04.820 21:12:31 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:04.820 21:12:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:13.079 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:13.079 21:12:39 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:13.079 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:13.079 Found net devices under 0000:31:00.0: cvl_0_0 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.079 21:12:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.080 21:12:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.080 21:12:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.080 21:12:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:13.080 Found net devices under 0000:31:00.1: cvl_0_1 00:22:13.080 21:12:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.080 21:12:39 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:13.080 21:12:39 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.080 21:12:39 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:13.080 21:12:39 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:13.080 21:12:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:13.080 21:12:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:22:13.080 21:12:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.080 ************************************ 00:22:13.080 START TEST nvmf_perf_adq 00:22:13.080 ************************************ 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:13.080 * Looking for test storage... 00:22:13.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.080 21:12:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:21.230 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:21.230 Found 0000:31:00.1 (0x8086 - 0x159b) 
00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:21.230 Found net devices under 0000:31:00.0: cvl_0_0 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:21.230 Found net devices under 0000:31:00.1: cvl_0_1 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:21.230 21:12:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:21.802 21:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:23.719 21:12:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:29.011 21:12:55 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.011 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:29.012 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:29.012 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:29.012 Found net devices under 0000:31:00.0: cvl_0_0 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:29.012 Found net devices under 0000:31:00.1: cvl_0_1 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.012 21:12:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.012 21:12:56 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:22:29.012 00:22:29.012 --- 10.0.0.2 ping statistics --- 00:22:29.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.012 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:22:29.012 00:22:29.012 --- 10.0.0.1 ping statistics --- 00:22:29.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.012 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2028374 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2028374 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2028374 ']' 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.012 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.012 [2024-07-15 21:12:56.207136] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
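For reference, the interface and namespace plumbing that nvmftestinit performs in the trace above reduces to the short sketch below. It assumes the two ice ports were enumerated as cvl_0_0 (target side) and cvl_0_1 (initiator side) and reuses this run's 10.0.0.1/10.0.0.2 addressing; other hosts or NICs may enumerate different device names.

# Sketch of the netns/address setup captured above (run as root).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # target address reachable from the host side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # initiator address reachable from inside the namespace
# The target application is then started inside the namespace, as in the
# "Starting SPDK" line above, e.g.:
# ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc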
00:22:29.012 [2024-07-15 21:12:56.207203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.012 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.012 [2024-07-15 21:12:56.288104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.273 [2024-07-15 21:12:56.365497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.273 [2024-07-15 21:12:56.365538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.273 [2024-07-15 21:12:56.365546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.273 [2024-07-15 21:12:56.365552] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.273 [2024-07-15 21:12:56.365558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.273 [2024-07-15 21:12:56.365698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.273 [2024-07-15 21:12:56.365818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.273 [2024-07-15 21:12:56.365975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.273 [2024-07-15 21:12:56.365975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.845 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.845 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:29.845 21:12:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.845 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.845 21:12:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:29.845 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.846 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.846 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.846 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:29.846 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.846 21:12:57 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.107 [2024-07-15 21:12:57.152274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.107 Malloc1 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.107 [2024-07-15 21:12:57.211681] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2028727 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:30.107 21:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:30.107 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:32.022 
"tick_rate": 2400000000, 00:22:32.022 "poll_groups": [ 00:22:32.022 { 00:22:32.022 "name": "nvmf_tgt_poll_group_000", 00:22:32.022 "admin_qpairs": 1, 00:22:32.022 "io_qpairs": 1, 00:22:32.022 "current_admin_qpairs": 1, 00:22:32.022 "current_io_qpairs": 1, 00:22:32.022 "pending_bdev_io": 0, 00:22:32.022 "completed_nvme_io": 20295, 00:22:32.022 "transports": [ 00:22:32.022 { 00:22:32.022 "trtype": "TCP" 00:22:32.022 } 00:22:32.022 ] 00:22:32.022 }, 00:22:32.022 { 00:22:32.022 "name": "nvmf_tgt_poll_group_001", 00:22:32.022 "admin_qpairs": 0, 00:22:32.022 "io_qpairs": 1, 00:22:32.022 "current_admin_qpairs": 0, 00:22:32.022 "current_io_qpairs": 1, 00:22:32.022 "pending_bdev_io": 0, 00:22:32.022 "completed_nvme_io": 29537, 00:22:32.022 "transports": [ 00:22:32.022 { 00:22:32.022 "trtype": "TCP" 00:22:32.022 } 00:22:32.022 ] 00:22:32.022 }, 00:22:32.022 { 00:22:32.022 "name": "nvmf_tgt_poll_group_002", 00:22:32.022 "admin_qpairs": 0, 00:22:32.022 "io_qpairs": 1, 00:22:32.022 "current_admin_qpairs": 0, 00:22:32.022 "current_io_qpairs": 1, 00:22:32.022 "pending_bdev_io": 0, 00:22:32.022 "completed_nvme_io": 20771, 00:22:32.022 "transports": [ 00:22:32.022 { 00:22:32.022 "trtype": "TCP" 00:22:32.022 } 00:22:32.022 ] 00:22:32.022 }, 00:22:32.022 { 00:22:32.022 "name": "nvmf_tgt_poll_group_003", 00:22:32.022 "admin_qpairs": 0, 00:22:32.022 "io_qpairs": 1, 00:22:32.022 "current_admin_qpairs": 0, 00:22:32.022 "current_io_qpairs": 1, 00:22:32.022 "pending_bdev_io": 0, 00:22:32.022 "completed_nvme_io": 20985, 00:22:32.022 "transports": [ 00:22:32.022 { 00:22:32.022 "trtype": "TCP" 00:22:32.022 } 00:22:32.022 ] 00:22:32.022 } 00:22:32.022 ] 00:22:32.022 }' 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:32.022 21:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2028727 00:22:40.163 Initializing NVMe Controllers 00:22:40.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:40.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:40.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:40.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:40.163 Initialization complete. Launching workers. 
00:22:40.163 ======================================================== 00:22:40.163 Latency(us) 00:22:40.163 Device Information : IOPS MiB/s Average min max 00:22:40.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11378.00 44.45 5626.10 2059.28 9718.79 00:22:40.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15108.40 59.02 4235.72 1242.00 8707.27 00:22:40.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13834.00 54.04 4626.39 973.74 10279.92 00:22:40.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13644.30 53.30 4690.24 1220.08 10761.20 00:22:40.164 ======================================================== 00:22:40.164 Total : 53964.69 210.80 4743.94 973.74 10761.20 00:22:40.164 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.164 rmmod nvme_tcp 00:22:40.164 rmmod nvme_fabrics 00:22:40.164 rmmod nvme_keyring 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2028374 ']' 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2028374 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2028374 ']' 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2028374 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2028374 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2028374' 00:22:40.164 killing process with pid 2028374 00:22:40.164 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2028374 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2028374 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.424 21:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.971 21:13:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.971 21:13:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:42.971 21:13:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:44.358 21:13:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:46.272 21:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.559 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.560 21:13:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:51.560 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:51.560 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:51.560 Found net devices under 0000:31:00.0: cvl_0_0 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:51.560 Found net devices under 0000:31:00.1: cvl_0_1 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.560 
21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:22:51.560 00:22:51.560 --- 10.0.0.2 ping statistics --- 00:22:51.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.560 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:22:51.560 00:22:51.560 --- 10.0.0.1 ping statistics --- 00:22:51.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.560 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:51.560 net.core.busy_poll = 1 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:51.560 net.core.busy_read = 1 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2033189 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2033189 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2033189 ']' 00:22:51.560 21:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:51.561 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.561 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.561 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.561 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.561 21:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.561 [2024-07-15 21:13:18.805877] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:22:51.561 [2024-07-15 21:13:18.805946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.561 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.821 [2024-07-15 21:13:18.889492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.821 [2024-07-15 21:13:18.964196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.821 [2024-07-15 21:13:18.964244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.821 [2024-07-15 21:13:18.964257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.821 [2024-07-15 21:13:18.964263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.821 [2024-07-15 21:13:18.964269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
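The busy-poll pass differs from the first run mainly in the ADQ plumbing that adq_configure_driver applies in the trace above. Condensed, and again assuming this run's device name (cvl_0_0 inside the cvl_0_0_ns_spdk namespace) and the 10.0.0.2:4420 listener, it amounts to:

# Enable hardware TC offload on the E810 port and disable the packet-inspect
# optimization so ADQ channel filters take effect.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# Turn on kernel busy polling for sockets (root namespace sysctls).
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes in channel mode; TC1 (queues 2@2) is reserved for NVMe/TCP.
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# Steer traffic for the 10.0.0.2:4420 listener into TC1, offloaded to hardware.
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
    prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Pin XPS/RXQ mappings for the device (SPDK helper script, path relative to the repo root).
ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0

On the target side this is paired with sock_impl_set_options --enable-placement-id 1 and a transport created with --sock-priority 1, as the RPC calls that follow show.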
00:22:51.821 [2024-07-15 21:13:18.964451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.821 [2024-07-15 21:13:18.964570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.821 [2024-07-15 21:13:18.964728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.821 [2024-07-15 21:13:18.964729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.394 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.654 [2024-07-15 21:13:19.751609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.654 Malloc1 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.654 21:13:19 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.654 [2024-07-15 21:13:19.810998] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.654 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2033541 00:22:52.655 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:52.655 21:13:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:52.655 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.566 21:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:54.566 21:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.566 21:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.566 21:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.566 21:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:54.566 "tick_rate": 2400000000, 00:22:54.566 "poll_groups": [ 00:22:54.566 { 00:22:54.566 "name": "nvmf_tgt_poll_group_000", 00:22:54.566 "admin_qpairs": 1, 00:22:54.566 "io_qpairs": 1, 00:22:54.566 "current_admin_qpairs": 1, 00:22:54.566 "current_io_qpairs": 1, 00:22:54.566 "pending_bdev_io": 0, 00:22:54.566 "completed_nvme_io": 27225, 00:22:54.566 "transports": [ 00:22:54.566 { 00:22:54.566 "trtype": "TCP" 00:22:54.566 } 00:22:54.566 ] 00:22:54.566 }, 00:22:54.566 { 00:22:54.566 "name": "nvmf_tgt_poll_group_001", 00:22:54.566 "admin_qpairs": 0, 00:22:54.566 "io_qpairs": 3, 00:22:54.566 "current_admin_qpairs": 0, 00:22:54.566 "current_io_qpairs": 3, 00:22:54.566 "pending_bdev_io": 0, 00:22:54.566 "completed_nvme_io": 42417, 00:22:54.566 "transports": [ 00:22:54.566 { 00:22:54.566 "trtype": "TCP" 00:22:54.566 } 00:22:54.566 ] 00:22:54.566 }, 00:22:54.566 { 00:22:54.566 "name": "nvmf_tgt_poll_group_002", 00:22:54.566 "admin_qpairs": 0, 00:22:54.566 "io_qpairs": 0, 00:22:54.566 "current_admin_qpairs": 0, 00:22:54.566 "current_io_qpairs": 0, 00:22:54.566 "pending_bdev_io": 0, 00:22:54.566 "completed_nvme_io": 0, 
00:22:54.566 "transports": [ 00:22:54.566 { 00:22:54.566 "trtype": "TCP" 00:22:54.566 } 00:22:54.566 ] 00:22:54.566 }, 00:22:54.566 { 00:22:54.566 "name": "nvmf_tgt_poll_group_003", 00:22:54.566 "admin_qpairs": 0, 00:22:54.566 "io_qpairs": 0, 00:22:54.566 "current_admin_qpairs": 0, 00:22:54.566 "current_io_qpairs": 0, 00:22:54.566 "pending_bdev_io": 0, 00:22:54.566 "completed_nvme_io": 0, 00:22:54.566 "transports": [ 00:22:54.566 { 00:22:54.566 "trtype": "TCP" 00:22:54.566 } 00:22:54.566 ] 00:22:54.566 } 00:22:54.566 ] 00:22:54.566 }' 00:22:54.566 21:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:54.566 21:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:54.826 21:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:54.826 21:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:54.826 21:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2033541 00:23:02.969 Initializing NVMe Controllers 00:23:02.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:02.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:02.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:02.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:02.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:02.970 Initialization complete. Launching workers. 00:23:02.970 ======================================================== 00:23:02.970 Latency(us) 00:23:02.970 Device Information : IOPS MiB/s Average min max 00:23:02.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7195.10 28.11 8907.11 1341.07 54757.68 00:23:02.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6626.00 25.88 9659.21 1439.72 54669.79 00:23:02.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 18454.99 72.09 3467.54 1239.43 8110.55 00:23:02.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8480.70 33.13 7546.54 1126.66 54302.71 00:23:02.970 ======================================================== 00:23:02.970 Total : 40756.78 159.21 6283.19 1126.66 54757.68 00:23:02.970 00:23:02.970 21:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:02.970 21:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.970 21:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:02.970 21:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.970 21:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:02.970 21:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.970 21:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.970 rmmod nvme_tcp 00:23:02.970 rmmod nvme_fabrics 00:23:02.970 rmmod nvme_keyring 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2033189 ']' 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2033189 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2033189 ']' 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2033189 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2033189 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2033189' 00:23:02.970 killing process with pid 2033189 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2033189 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2033189 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.970 21:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.516 21:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.516 21:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:05.516 00:23:05.516 real 0m52.767s 00:23:05.516 user 2m49.173s 00:23:05.516 sys 0m11.394s 00:23:05.516 21:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.516 21:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.516 ************************************ 00:23:05.516 END TEST nvmf_perf_adq 00:23:05.516 ************************************ 00:23:05.516 21:13:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:05.516 21:13:32 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:05.516 21:13:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:05.516 21:13:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.516 21:13:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.516 ************************************ 00:23:05.516 START TEST nvmf_shutdown 00:23:05.516 ************************************ 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:05.516 * Looking for test storage... 
00:23:05.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.516 ************************************ 00:23:05.516 START TEST nvmf_shutdown_tc1 00:23:05.516 ************************************ 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:05.516 21:13:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.516 21:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:13.704 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:13.704 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.704 21:13:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:13.704 Found net devices under 0000:31:00.0: cvl_0_0 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:13.704 Found net devices under 0000:31:00.1: cvl_0_1 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.704 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.705 21:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:23:13.705 00:23:13.705 --- 10.0.0.2 ping statistics --- 00:23:13.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.705 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:23:13.705 00:23:13.705 --- 10.0.0.1 ping statistics --- 00:23:13.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.705 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2040126 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2040126 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2040126 ']' 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.705 21:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:13.705 [2024-07-15 21:13:40.343408] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
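Note: the nvmf_tgt process launched just above runs inside the network namespace that the preceding trace built around the second E810 port. Consolidated into plain commands, and with the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addressing and the binary path taken from this particular run rather than being universal defaults, the setup reduces to the following sketch:

  # split the NIC pair: one port stays in the root namespace, the other moves into cvl_0_0_ns_spdk
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow TCP port 4420 through the host firewall on the initiator-side interface, then confirm reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # load the kernel NVMe/TCP initiator and start the SPDK target inside the namespace (path assumes an SPDK build tree)
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E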
00:23:13.705 [2024-07-15 21:13:40.343472] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.705 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.705 [2024-07-15 21:13:40.438116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:13.705 [2024-07-15 21:13:40.534607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.705 [2024-07-15 21:13:40.534666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.705 [2024-07-15 21:13:40.534675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.705 [2024-07-15 21:13:40.534682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.705 [2024-07-15 21:13:40.534688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.705 [2024-07-15 21:13:40.534827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.705 [2024-07-15 21:13:40.534999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.705 [2024-07-15 21:13:40.535501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:13.705 [2024-07-15 21:13:40.535503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.965 [2024-07-15 21:13:41.158699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:13.965 21:13:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.965 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.965 Malloc1 00:23:14.225 [2024-07-15 21:13:41.262257] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.225 Malloc2 00:23:14.225 Malloc3 00:23:14.225 Malloc4 00:23:14.225 Malloc5 00:23:14.225 Malloc6 00:23:14.226 Malloc7 00:23:14.487 Malloc8 00:23:14.487 Malloc9 00:23:14.487 Malloc10 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2040412 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2040412 
/var/tmp/bdevperf.sock 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2040412 ']' 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.487 { 00:23:14.487 "params": { 00:23:14.487 "name": "Nvme$subsystem", 00:23:14.487 "trtype": "$TEST_TRANSPORT", 00:23:14.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.487 "adrfam": "ipv4", 00:23:14.487 "trsvcid": "$NVMF_PORT", 00:23:14.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.487 "hdgst": ${hdgst:-false}, 00:23:14.487 "ddgst": ${ddgst:-false} 00:23:14.487 }, 00:23:14.487 "method": "bdev_nvme_attach_controller" 00:23:14.487 } 00:23:14.487 EOF 00:23:14.487 )") 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.487 { 00:23:14.487 "params": { 00:23:14.487 "name": "Nvme$subsystem", 00:23:14.487 "trtype": "$TEST_TRANSPORT", 00:23:14.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.487 "adrfam": "ipv4", 00:23:14.487 "trsvcid": "$NVMF_PORT", 00:23:14.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.487 "hdgst": ${hdgst:-false}, 00:23:14.487 "ddgst": ${ddgst:-false} 00:23:14.487 }, 00:23:14.487 "method": "bdev_nvme_attach_controller" 00:23:14.487 } 00:23:14.487 EOF 00:23:14.487 )") 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.487 { 00:23:14.487 "params": { 00:23:14.487 
"name": "Nvme$subsystem", 00:23:14.487 "trtype": "$TEST_TRANSPORT", 00:23:14.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.487 "adrfam": "ipv4", 00:23:14.487 "trsvcid": "$NVMF_PORT", 00:23:14.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.487 "hdgst": ${hdgst:-false}, 00:23:14.487 "ddgst": ${ddgst:-false} 00:23:14.487 }, 00:23:14.487 "method": "bdev_nvme_attach_controller" 00:23:14.487 } 00:23:14.487 EOF 00:23:14.487 )") 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.487 { 00:23:14.487 "params": { 00:23:14.487 "name": "Nvme$subsystem", 00:23:14.487 "trtype": "$TEST_TRANSPORT", 00:23:14.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.487 "adrfam": "ipv4", 00:23:14.487 "trsvcid": "$NVMF_PORT", 00:23:14.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.487 "hdgst": ${hdgst:-false}, 00:23:14.487 "ddgst": ${ddgst:-false} 00:23:14.487 }, 00:23:14.487 "method": "bdev_nvme_attach_controller" 00:23:14.487 } 00:23:14.487 EOF 00:23:14.487 )") 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.487 { 00:23:14.487 "params": { 00:23:14.487 "name": "Nvme$subsystem", 00:23:14.487 "trtype": "$TEST_TRANSPORT", 00:23:14.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.487 "adrfam": "ipv4", 00:23:14.487 "trsvcid": "$NVMF_PORT", 00:23:14.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.487 "hdgst": ${hdgst:-false}, 00:23:14.487 "ddgst": ${ddgst:-false} 00:23:14.487 }, 00:23:14.487 "method": "bdev_nvme_attach_controller" 00:23:14.487 } 00:23:14.487 EOF 00:23:14.487 )") 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.487 { 00:23:14.487 "params": { 00:23:14.487 "name": "Nvme$subsystem", 00:23:14.487 "trtype": "$TEST_TRANSPORT", 00:23:14.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.487 "adrfam": "ipv4", 00:23:14.487 "trsvcid": "$NVMF_PORT", 00:23:14.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.487 "hdgst": ${hdgst:-false}, 00:23:14.487 "ddgst": ${ddgst:-false} 00:23:14.487 }, 00:23:14.487 "method": "bdev_nvme_attach_controller" 00:23:14.487 } 00:23:14.487 EOF 00:23:14.487 )") 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.487 [2024-07-15 21:13:41.718672] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:23:14.487 [2024-07-15 21:13:41.718724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.487 { 00:23:14.487 "params": { 00:23:14.487 "name": "Nvme$subsystem", 00:23:14.487 "trtype": "$TEST_TRANSPORT", 00:23:14.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.487 "adrfam": "ipv4", 00:23:14.487 "trsvcid": "$NVMF_PORT", 00:23:14.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.487 "hdgst": ${hdgst:-false}, 00:23:14.487 "ddgst": ${ddgst:-false} 00:23:14.487 }, 00:23:14.487 "method": "bdev_nvme_attach_controller" 00:23:14.487 } 00:23:14.487 EOF 00:23:14.487 )") 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.487 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.487 { 00:23:14.487 "params": { 00:23:14.487 "name": "Nvme$subsystem", 00:23:14.487 "trtype": "$TEST_TRANSPORT", 00:23:14.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.487 "adrfam": "ipv4", 00:23:14.487 "trsvcid": "$NVMF_PORT", 00:23:14.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.487 "hdgst": ${hdgst:-false}, 00:23:14.487 "ddgst": ${ddgst:-false} 00:23:14.487 }, 00:23:14.487 "method": "bdev_nvme_attach_controller" 00:23:14.487 } 00:23:14.487 EOF 00:23:14.488 )") 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.488 { 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme$subsystem", 00:23:14.488 "trtype": "$TEST_TRANSPORT", 00:23:14.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "$NVMF_PORT", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.488 "hdgst": ${hdgst:-false}, 00:23:14.488 "ddgst": ${ddgst:-false} 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 } 00:23:14.488 EOF 00:23:14.488 )") 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.488 { 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme$subsystem", 00:23:14.488 "trtype": "$TEST_TRANSPORT", 00:23:14.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "$NVMF_PORT", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.488 "hdgst": ${hdgst:-false}, 
00:23:14.488 "ddgst": ${ddgst:-false} 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 } 00:23:14.488 EOF 00:23:14.488 )") 00:23:14.488 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:14.488 21:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme1", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme2", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme3", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme4", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme5", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme6", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme7", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 
00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme8", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme9", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 },{ 00:23:14.488 "params": { 00:23:14.488 "name": "Nvme10", 00:23:14.488 "trtype": "tcp", 00:23:14.488 "traddr": "10.0.0.2", 00:23:14.488 "adrfam": "ipv4", 00:23:14.488 "trsvcid": "4420", 00:23:14.488 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:14.488 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:14.488 "hdgst": false, 00:23:14.488 "ddgst": false 00:23:14.488 }, 00:23:14.488 "method": "bdev_nvme_attach_controller" 00:23:14.488 }' 00:23:14.748 [2024-07-15 21:13:41.786001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.748 [2024-07-15 21:13:41.850649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2040412 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:16.163 21:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:17.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2040412 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2040126 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.109 { 00:23:17.109 "params": { 00:23:17.109 "name": "Nvme$subsystem", 00:23:17.109 "trtype": "$TEST_TRANSPORT", 00:23:17.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.109 "adrfam": "ipv4", 00:23:17.109 "trsvcid": "$NVMF_PORT", 00:23:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.109 "hdgst": ${hdgst:-false}, 00:23:17.109 "ddgst": ${ddgst:-false} 00:23:17.109 }, 00:23:17.109 "method": "bdev_nvme_attach_controller" 00:23:17.109 } 00:23:17.109 EOF 00:23:17.109 )") 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.109 { 00:23:17.109 "params": { 00:23:17.109 "name": "Nvme$subsystem", 00:23:17.109 "trtype": "$TEST_TRANSPORT", 00:23:17.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.109 "adrfam": "ipv4", 00:23:17.109 "trsvcid": "$NVMF_PORT", 00:23:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.109 "hdgst": ${hdgst:-false}, 00:23:17.109 "ddgst": ${ddgst:-false} 00:23:17.109 }, 00:23:17.109 "method": "bdev_nvme_attach_controller" 00:23:17.109 } 00:23:17.109 EOF 00:23:17.109 )") 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.109 { 00:23:17.109 "params": { 00:23:17.109 "name": "Nvme$subsystem", 00:23:17.109 "trtype": "$TEST_TRANSPORT", 00:23:17.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.109 "adrfam": "ipv4", 00:23:17.109 "trsvcid": "$NVMF_PORT", 00:23:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.109 "hdgst": ${hdgst:-false}, 00:23:17.109 "ddgst": ${ddgst:-false} 00:23:17.109 }, 00:23:17.109 "method": "bdev_nvme_attach_controller" 00:23:17.109 } 00:23:17.109 EOF 00:23:17.109 )") 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.109 { 00:23:17.109 "params": { 00:23:17.109 "name": "Nvme$subsystem", 00:23:17.109 "trtype": "$TEST_TRANSPORT", 00:23:17.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.109 "adrfam": "ipv4", 00:23:17.109 "trsvcid": "$NVMF_PORT", 00:23:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.109 "hdgst": ${hdgst:-false}, 00:23:17.109 "ddgst": ${ddgst:-false} 00:23:17.109 }, 00:23:17.109 "method": "bdev_nvme_attach_controller" 00:23:17.109 } 00:23:17.109 EOF 00:23:17.109 )") 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.109 21:13:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.109 { 00:23:17.109 "params": { 00:23:17.109 "name": "Nvme$subsystem", 00:23:17.109 "trtype": "$TEST_TRANSPORT", 00:23:17.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.109 "adrfam": "ipv4", 00:23:17.109 "trsvcid": "$NVMF_PORT", 00:23:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.109 "hdgst": ${hdgst:-false}, 00:23:17.109 "ddgst": ${ddgst:-false} 00:23:17.109 }, 00:23:17.109 "method": "bdev_nvme_attach_controller" 00:23:17.109 } 00:23:17.109 EOF 00:23:17.109 )") 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.109 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.109 { 00:23:17.109 "params": { 00:23:17.109 "name": "Nvme$subsystem", 00:23:17.109 "trtype": "$TEST_TRANSPORT", 00:23:17.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.109 "adrfam": "ipv4", 00:23:17.109 "trsvcid": "$NVMF_PORT", 00:23:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.109 "hdgst": ${hdgst:-false}, 00:23:17.109 "ddgst": ${ddgst:-false} 00:23:17.109 }, 00:23:17.109 "method": "bdev_nvme_attach_controller" 00:23:17.109 } 00:23:17.110 EOF 00:23:17.110 )") 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.110 [2024-07-15 21:13:44.123658] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
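The heredoc loop expanded around this point is gen_nvmf_target_json (nvmf/common.sh@532-558): one bdev_nvme_attach_controller entry per subsystem index, rendered from a heredoc and joined with IFS=','. A minimal sketch of that pattern, using the literal values this run resolves to (tcp / 10.0.0.2 / 4420); the real helper takes them from $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT and wraps the entries in the full bdev-subsystem JSON that bdevperf loads via --json:

    gen_config_entries() {
        local subsystem config=()
        for subsystem in "$@"; do
            config+=("$(cat <<EOF
    { "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        local IFS=,
        printf '%s\n' "${config[*]}"   # the comma-joined string seen in the printf further below
    }
    # e.g. gen_config_entries 1 2 3 4 5 6 7 8 9 10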
00:23:17.110 [2024-07-15 21:13:44.123713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041005 ] 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.110 { 00:23:17.110 "params": { 00:23:17.110 "name": "Nvme$subsystem", 00:23:17.110 "trtype": "$TEST_TRANSPORT", 00:23:17.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.110 "adrfam": "ipv4", 00:23:17.110 "trsvcid": "$NVMF_PORT", 00:23:17.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.110 "hdgst": ${hdgst:-false}, 00:23:17.110 "ddgst": ${ddgst:-false} 00:23:17.110 }, 00:23:17.110 "method": "bdev_nvme_attach_controller" 00:23:17.110 } 00:23:17.110 EOF 00:23:17.110 )") 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.110 { 00:23:17.110 "params": { 00:23:17.110 "name": "Nvme$subsystem", 00:23:17.110 "trtype": "$TEST_TRANSPORT", 00:23:17.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.110 "adrfam": "ipv4", 00:23:17.110 "trsvcid": "$NVMF_PORT", 00:23:17.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.110 "hdgst": ${hdgst:-false}, 00:23:17.110 "ddgst": ${ddgst:-false} 00:23:17.110 }, 00:23:17.110 "method": "bdev_nvme_attach_controller" 00:23:17.110 } 00:23:17.110 EOF 00:23:17.110 )") 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.110 { 00:23:17.110 "params": { 00:23:17.110 "name": "Nvme$subsystem", 00:23:17.110 "trtype": "$TEST_TRANSPORT", 00:23:17.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.110 "adrfam": "ipv4", 00:23:17.110 "trsvcid": "$NVMF_PORT", 00:23:17.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.110 "hdgst": ${hdgst:-false}, 00:23:17.110 "ddgst": ${ddgst:-false} 00:23:17.110 }, 00:23:17.110 "method": "bdev_nvme_attach_controller" 00:23:17.110 } 00:23:17.110 EOF 00:23:17.110 )") 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.110 { 00:23:17.110 "params": { 00:23:17.110 "name": "Nvme$subsystem", 00:23:17.110 "trtype": "$TEST_TRANSPORT", 00:23:17.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.110 "adrfam": "ipv4", 00:23:17.110 "trsvcid": "$NVMF_PORT", 00:23:17.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.110 
"hdgst": ${hdgst:-false}, 00:23:17.110 "ddgst": ${ddgst:-false} 00:23:17.110 }, 00:23:17.110 "method": "bdev_nvme_attach_controller" 00:23:17.110 } 00:23:17.110 EOF 00:23:17.110 )") 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.110 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:17.110 21:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:17.110 "params": { 00:23:17.110 "name": "Nvme1", 00:23:17.110 "trtype": "tcp", 00:23:17.110 "traddr": "10.0.0.2", 00:23:17.110 "adrfam": "ipv4", 00:23:17.110 "trsvcid": "4420", 00:23:17.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.110 "hdgst": false, 00:23:17.110 "ddgst": false 00:23:17.110 }, 00:23:17.110 "method": "bdev_nvme_attach_controller" 00:23:17.110 },{ 00:23:17.110 "params": { 00:23:17.110 "name": "Nvme2", 00:23:17.110 "trtype": "tcp", 00:23:17.110 "traddr": "10.0.0.2", 00:23:17.110 "adrfam": "ipv4", 00:23:17.110 "trsvcid": "4420", 00:23:17.110 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:17.110 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:17.110 "hdgst": false, 00:23:17.110 "ddgst": false 00:23:17.110 }, 00:23:17.110 "method": "bdev_nvme_attach_controller" 00:23:17.110 },{ 00:23:17.110 "params": { 00:23:17.110 "name": "Nvme3", 00:23:17.110 "trtype": "tcp", 00:23:17.110 "traddr": "10.0.0.2", 00:23:17.110 "adrfam": "ipv4", 00:23:17.110 "trsvcid": "4420", 00:23:17.110 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:17.110 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:17.110 "hdgst": false, 00:23:17.110 "ddgst": false 00:23:17.110 }, 00:23:17.110 "method": "bdev_nvme_attach_controller" 00:23:17.110 },{ 00:23:17.110 "params": { 00:23:17.110 "name": "Nvme4", 00:23:17.110 "trtype": "tcp", 00:23:17.111 "traddr": "10.0.0.2", 00:23:17.111 "adrfam": "ipv4", 00:23:17.111 "trsvcid": "4420", 00:23:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:17.111 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:17.111 "hdgst": false, 00:23:17.111 "ddgst": false 00:23:17.111 }, 00:23:17.111 "method": "bdev_nvme_attach_controller" 00:23:17.111 },{ 00:23:17.111 "params": { 00:23:17.111 "name": "Nvme5", 00:23:17.111 "trtype": "tcp", 00:23:17.111 "traddr": "10.0.0.2", 00:23:17.111 "adrfam": "ipv4", 00:23:17.111 "trsvcid": "4420", 00:23:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:17.111 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:17.111 "hdgst": false, 00:23:17.111 "ddgst": false 00:23:17.111 }, 00:23:17.111 "method": "bdev_nvme_attach_controller" 00:23:17.111 },{ 00:23:17.111 "params": { 00:23:17.111 "name": "Nvme6", 00:23:17.111 "trtype": "tcp", 00:23:17.111 "traddr": "10.0.0.2", 00:23:17.111 "adrfam": "ipv4", 00:23:17.111 "trsvcid": "4420", 00:23:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:17.111 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:17.111 "hdgst": false, 00:23:17.111 "ddgst": false 00:23:17.111 }, 00:23:17.111 "method": "bdev_nvme_attach_controller" 00:23:17.111 },{ 00:23:17.111 "params": { 00:23:17.111 "name": "Nvme7", 00:23:17.111 "trtype": "tcp", 00:23:17.111 "traddr": "10.0.0.2", 00:23:17.111 "adrfam": "ipv4", 00:23:17.111 "trsvcid": "4420", 00:23:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:17.111 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:17.111 "hdgst": false, 
00:23:17.111 "ddgst": false 00:23:17.111 }, 00:23:17.111 "method": "bdev_nvme_attach_controller" 00:23:17.111 },{ 00:23:17.111 "params": { 00:23:17.111 "name": "Nvme8", 00:23:17.111 "trtype": "tcp", 00:23:17.111 "traddr": "10.0.0.2", 00:23:17.111 "adrfam": "ipv4", 00:23:17.111 "trsvcid": "4420", 00:23:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:17.111 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:17.111 "hdgst": false, 00:23:17.111 "ddgst": false 00:23:17.111 }, 00:23:17.111 "method": "bdev_nvme_attach_controller" 00:23:17.111 },{ 00:23:17.111 "params": { 00:23:17.111 "name": "Nvme9", 00:23:17.111 "trtype": "tcp", 00:23:17.111 "traddr": "10.0.0.2", 00:23:17.111 "adrfam": "ipv4", 00:23:17.111 "trsvcid": "4420", 00:23:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:17.111 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:17.111 "hdgst": false, 00:23:17.111 "ddgst": false 00:23:17.111 }, 00:23:17.111 "method": "bdev_nvme_attach_controller" 00:23:17.111 },{ 00:23:17.111 "params": { 00:23:17.111 "name": "Nvme10", 00:23:17.111 "trtype": "tcp", 00:23:17.111 "traddr": "10.0.0.2", 00:23:17.111 "adrfam": "ipv4", 00:23:17.111 "trsvcid": "4420", 00:23:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:17.111 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:17.111 "hdgst": false, 00:23:17.111 "ddgst": false 00:23:17.111 }, 00:23:17.111 "method": "bdev_nvme_attach_controller" 00:23:17.111 }' 00:23:17.111 [2024-07-15 21:13:44.190029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.111 [2024-07-15 21:13:44.254385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.496 Running I/O for 1 seconds... 00:23:19.440 00:23:19.440 Latency(us) 00:23:19.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.440 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme1n1 : 1.12 229.10 14.32 0.00 0.00 276365.87 18350.08 263891.63 00:23:19.441 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme2n1 : 1.08 237.09 14.82 0.00 0.00 257291.09 19988.48 241172.48 00:23:19.441 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme3n1 : 1.18 270.48 16.90 0.00 0.00 226658.13 15182.51 246415.36 00:23:19.441 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme4n1 : 1.09 235.63 14.73 0.00 0.00 254655.36 19114.67 244667.73 00:23:19.441 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme5n1 : 1.18 216.84 13.55 0.00 0.00 273357.01 27088.21 253405.87 00:23:19.441 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme6n1 : 1.19 267.81 16.74 0.00 0.00 217587.71 18459.31 232434.35 00:23:19.441 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme7n1 : 1.19 269.36 16.83 0.00 0.00 212472.66 19005.44 225443.84 00:23:19.441 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme8n1 : 1.20 
266.62 16.66 0.00 0.00 211209.39 18131.63 244667.73 00:23:19.441 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme9n1 : 1.20 265.59 16.60 0.00 0.00 208432.47 17039.36 242920.11 00:23:19.441 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:19.441 Verification LBA range: start 0x0 length 0x400 00:23:19.441 Nvme10n1 : 1.17 218.18 13.64 0.00 0.00 248099.41 20425.39 258648.75 00:23:19.441 =================================================================================================================== 00:23:19.441 Total : 2476.69 154.79 0.00 0.00 236019.48 15182.51 263891.63 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.702 rmmod nvme_tcp 00:23:19.702 rmmod nvme_fabrics 00:23:19.702 rmmod nvme_keyring 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2040126 ']' 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2040126 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2040126 ']' 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2040126 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2040126 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:19.702 21:13:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2040126' 00:23:19.702 killing process with pid 2040126 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2040126 00:23:19.702 21:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2040126 00:23:19.964 21:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.964 21:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.964 21:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.964 21:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.964 21:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.964 21:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.964 21:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.964 21:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.512 00:23:22.512 real 0m16.715s 00:23:22.512 user 0m32.523s 00:23:22.512 sys 0m6.938s 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.512 ************************************ 00:23:22.512 END TEST nvmf_shutdown_tc1 00:23:22.512 ************************************ 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:22.512 ************************************ 00:23:22.512 START TEST nvmf_shutdown_tc2 00:23:22.512 ************************************ 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.512 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.513 21:13:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:22.513 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:22.513 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:22.513 Found net devices under 0000:31:00.0: cvl_0_0 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:22.513 Found net devices under 0000:31:00.1: cvl_0_1 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:22.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:23:22.513 00:23:22.513 --- 10.0.0.2 ping statistics --- 00:23:22.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.513 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:23:22.513 00:23:22.513 --- 10.0.0.1 ping statistics --- 00:23:22.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.513 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.513 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2042211 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2042211 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2042211 ']' 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.514 21:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.514 [2024-07-15 21:13:49.791526] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:23:22.514 [2024-07-15 21:13:49.791574] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.775 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.775 [2024-07-15 21:13:49.880791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.775 [2024-07-15 21:13:49.936685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.775 [2024-07-15 21:13:49.936719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.775 [2024-07-15 21:13:49.936724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.775 [2024-07-15 21:13:49.936729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.775 [2024-07-15 21:13:49.936733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
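The target bring-up traced above (nvmf_tcp_init at nvmf/common.sh@229-268, then nvmfappstart) amounts to: move one e810 port (cvl_0_0) into a private namespace as the target side, keep its peer (cvl_0_1) in the root namespace as the initiator side, sanity-check connectivity in both directions, then start nvmf_tgt inside the namespace and wait for its RPC socket. A condensed sketch using the addresses and interface names from this run; the retry count and the rpc.py probe are illustrative, not the exact waitforlisten helper:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator

    ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for ((i = 100; i != 0; i--)); do                         # waitforlisten, simplified
        kill -0 "$nvmfpid" || exit 1                         # target died before listening
        scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done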
00:23:22.775 [2024-07-15 21:13:49.936848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.775 [2024-07-15 21:13:49.937009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.775 [2024-07-15 21:13:49.937430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.775 [2024-07-15 21:13:49.937431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.363 [2024-07-15 21:13:50.607644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.363 21:13:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.363 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.622 21:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.622 Malloc1 00:23:23.622 [2024-07-15 21:13:50.706410] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.622 Malloc2 00:23:23.622 Malloc3 00:23:23.622 Malloc4 00:23:23.622 Malloc5 00:23:23.622 Malloc6 00:23:23.622 Malloc7 00:23:23.883 Malloc8 00:23:23.883 Malloc9 00:23:23.883 Malloc10 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2042501 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2042501 /var/tmp/bdevperf.sock 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2042501 ']' 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
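The create_subsystems loop above (shutdown.sh@27-28) writes one block of RPCs per subsystem into rpcs.txt and replays the whole batch through the single bare rpc_cmd at shutdown.sh@35, which is where the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 TCP listener in the output come from. The exact batch is not echoed in this log; a plausible per-subsystem block, using standard SPDK RPC names and illustrative malloc sizes, would look like:

    for i in "${num_subsystems[@]}"; do      # {1..10}
        cat <<EOF >> rpcs.txt
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    done
    rpc_cmd < rpcs.txt                       # one batched replay over /var/tmp/spdk.sock (invocation assumed)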
00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.883 { 00:23:23.883 "params": { 00:23:23.883 "name": "Nvme$subsystem", 00:23:23.883 "trtype": "$TEST_TRANSPORT", 00:23:23.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.883 "adrfam": "ipv4", 00:23:23.883 "trsvcid": "$NVMF_PORT", 00:23:23.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.883 "hdgst": ${hdgst:-false}, 00:23:23.883 "ddgst": ${ddgst:-false} 00:23:23.883 }, 00:23:23.883 "method": "bdev_nvme_attach_controller" 00:23:23.883 } 00:23:23.883 EOF 00:23:23.883 )") 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.883 { 00:23:23.883 "params": { 00:23:23.883 "name": "Nvme$subsystem", 00:23:23.883 "trtype": "$TEST_TRANSPORT", 00:23:23.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.883 "adrfam": "ipv4", 00:23:23.883 "trsvcid": "$NVMF_PORT", 00:23:23.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.883 "hdgst": ${hdgst:-false}, 00:23:23.883 "ddgst": ${ddgst:-false} 00:23:23.883 }, 00:23:23.883 "method": "bdev_nvme_attach_controller" 00:23:23.883 } 00:23:23.883 EOF 00:23:23.883 )") 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.883 { 00:23:23.883 "params": { 00:23:23.883 "name": "Nvme$subsystem", 00:23:23.883 "trtype": "$TEST_TRANSPORT", 00:23:23.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.883 "adrfam": "ipv4", 00:23:23.883 "trsvcid": "$NVMF_PORT", 00:23:23.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.883 "hdgst": ${hdgst:-false}, 00:23:23.883 "ddgst": ${ddgst:-false} 00:23:23.883 }, 00:23:23.883 "method": "bdev_nvme_attach_controller" 00:23:23.883 } 00:23:23.883 EOF 00:23:23.883 )") 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.883 { 00:23:23.883 "params": { 00:23:23.883 "name": "Nvme$subsystem", 00:23:23.883 "trtype": "$TEST_TRANSPORT", 00:23:23.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.883 "adrfam": "ipv4", 00:23:23.883 "trsvcid": "$NVMF_PORT", 00:23:23.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.883 "hdgst": ${hdgst:-false}, 00:23:23.883 "ddgst": ${ddgst:-false} 00:23:23.883 }, 00:23:23.883 "method": "bdev_nvme_attach_controller" 00:23:23.883 } 00:23:23.883 EOF 00:23:23.883 )") 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.883 { 00:23:23.883 "params": { 00:23:23.883 "name": "Nvme$subsystem", 00:23:23.883 "trtype": "$TEST_TRANSPORT", 00:23:23.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.883 "adrfam": "ipv4", 00:23:23.883 "trsvcid": "$NVMF_PORT", 00:23:23.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.883 "hdgst": ${hdgst:-false}, 00:23:23.883 "ddgst": ${ddgst:-false} 00:23:23.883 }, 00:23:23.883 "method": "bdev_nvme_attach_controller" 00:23:23.883 } 00:23:23.883 EOF 00:23:23.883 )") 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.883 { 00:23:23.883 "params": { 00:23:23.883 "name": "Nvme$subsystem", 00:23:23.883 "trtype": "$TEST_TRANSPORT", 00:23:23.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.883 "adrfam": "ipv4", 00:23:23.883 "trsvcid": "$NVMF_PORT", 00:23:23.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.883 "hdgst": ${hdgst:-false}, 00:23:23.883 "ddgst": ${ddgst:-false} 00:23:23.883 }, 00:23:23.883 "method": "bdev_nvme_attach_controller" 00:23:23.883 } 00:23:23.883 EOF 00:23:23.883 )") 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:23.883 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.884 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.884 { 00:23:23.884 "params": { 00:23:23.884 "name": "Nvme$subsystem", 00:23:23.884 "trtype": "$TEST_TRANSPORT", 00:23:23.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.884 "adrfam": "ipv4", 00:23:23.884 "trsvcid": "$NVMF_PORT", 00:23:23.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.884 "hdgst": ${hdgst:-false}, 00:23:23.884 "ddgst": ${ddgst:-false} 00:23:23.884 }, 00:23:23.884 "method": "bdev_nvme_attach_controller" 00:23:23.884 } 00:23:23.884 EOF 00:23:23.884 )") 00:23:23.884 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:23.884 [2024-07-15 21:13:51.162595] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:23:23.884 [2024-07-15 21:13:51.162658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2042501 ] 00:23:23.884 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.884 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.884 { 00:23:23.884 "params": { 00:23:23.884 "name": "Nvme$subsystem", 00:23:23.884 "trtype": "$TEST_TRANSPORT", 00:23:23.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.884 "adrfam": "ipv4", 00:23:23.884 "trsvcid": "$NVMF_PORT", 00:23:23.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.884 "hdgst": ${hdgst:-false}, 00:23:23.884 "ddgst": ${ddgst:-false} 00:23:23.884 }, 00:23:23.884 "method": "bdev_nvme_attach_controller" 00:23:23.884 } 00:23:23.884 EOF 00:23:23.884 )") 00:23:23.884 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.145 { 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme$subsystem", 00:23:24.145 "trtype": "$TEST_TRANSPORT", 00:23:24.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "$NVMF_PORT", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.145 "hdgst": ${hdgst:-false}, 00:23:24.145 "ddgst": ${ddgst:-false} 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 } 00:23:24.145 EOF 00:23:24.145 )") 00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.145 { 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme$subsystem", 00:23:24.145 "trtype": "$TEST_TRANSPORT", 00:23:24.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "$NVMF_PORT", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.145 "hdgst": ${hdgst:-false}, 00:23:24.145 "ddgst": ${ddgst:-false} 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 } 00:23:24.145 EOF 00:23:24.145 )") 00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
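Once this bdevperf instance (pid 2042501) is up, tc2 does not shut anything down until the workload is demonstrably making progress: the waitforio helper traced a little further below (shutdown.sh@50-69) polls bdev_get_iostat on Nvme1n1 every 0.25 s, up to 10 times, until at least 100 reads have completed (the counts in this run go 3, 67, 131). A simplified equivalent of that loop:

    waitforio() {
        local rpc_sock=$1 bdev=$2 i count ret=1
        for ((i = 10; i != 0; i--)); do
            count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                    | jq -r '.bdevs[0].num_read_ops')
            if [[ $count -ge 100 ]]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    # e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1   # only then are killprocess / kill -0 exercised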
00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:24.145 21:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme1", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme2", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme3", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme4", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme5", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme6", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme7", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme8", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:24.145 "hdgst": false, 
00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme9", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 },{ 00:23:24.145 "params": { 00:23:24.145 "name": "Nvme10", 00:23:24.145 "trtype": "tcp", 00:23:24.145 "traddr": "10.0.0.2", 00:23:24.145 "adrfam": "ipv4", 00:23:24.145 "trsvcid": "4420", 00:23:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:24.145 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:24.145 "hdgst": false, 00:23:24.145 "ddgst": false 00:23:24.145 }, 00:23:24.145 "method": "bdev_nvme_attach_controller" 00:23:24.145 }' 00:23:24.145 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.145 [2024-07-15 21:13:51.231703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.145 [2024-07-15 21:13:51.296147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.057 Running I/O for 10 seconds... 00:23:26.057 21:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.057 21:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:26.057 21:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:26.057 21:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.057 21:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:26.057 21:13:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:26.057 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:26.316 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:26.316 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:26.316 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:26.317 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:26.317 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.317 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.317 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.317 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:26.317 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:26.317 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2042501 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2042501 ']' 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2042501 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2042501 00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:26.577 21:13:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2042501'
00:23:26.577 killing process with pid 2042501
00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2042501
00:23:26.577 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2042501
00:23:26.577 Received shutdown signal, test time was about 0.967377 seconds
00:23:26.577
00:23:26.577 Latency(us)
00:23:26.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:26.577 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme1n1 : 0.96 266.09 16.63 0.00 0.00 237453.01 19005.44 232434.35
00:23:26.577 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme2n1 : 0.94 203.81 12.74 0.00 0.00 302112.14 19770.03 251658.24
00:23:26.577 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme3n1 : 0.93 212.49 13.28 0.00 0.00 282974.55 3140.27 248162.99
00:23:26.577 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme4n1 : 0.95 269.43 16.84 0.00 0.00 220110.72 17476.27 255153.49
00:23:26.577 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme5n1 : 0.93 205.98 12.87 0.00 0.00 280614.12 37573.97 232434.35
00:23:26.577 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme6n1 : 0.97 264.88 16.55 0.00 0.00 214528.21 18022.40 253405.87
00:23:26.577 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme7n1 : 0.96 267.84 16.74 0.00 0.00 206713.81 19442.35 217579.52
00:23:26.577 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme8n1 : 0.96 266.37 16.65 0.00 0.00 203234.13 24576.00 249910.61
00:23:26.577 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme9n1 : 0.94 203.57 12.72 0.00 0.00 258633.10 16165.55 258648.75
00:23:26.577 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.577 Verification LBA range: start 0x0 length 0x400
00:23:26.577 Nvme10n1 : 0.95 201.09 12.57 0.00 0.00 256224.71 16056.32 272629.76
00:23:26.577 ===================================================================================================================
00:23:26.577 Total : 2361.55 147.60 0.00 0.00 242086.57 3140.27 272629.76
00:23:26.837 21:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:27.780 21:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2042211
00:23:27.780 21:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:27.780 21:13:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.780 rmmod nvme_tcp 00:23:27.780 rmmod nvme_fabrics 00:23:27.780 rmmod nvme_keyring 00:23:27.780 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2042211 ']' 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2042211 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2042211 ']' 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2042211 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2042211 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2042211' 00:23:28.040 killing process with pid 2042211 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2042211 00:23:28.040 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2042211 00:23:28.299 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.299 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.299 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.299 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
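
Both teardown paths in the trace above (first the bdevperf process, pid 2042501, then the nvmf target, pid 2042211) run the same killprocess checks before the test case is declared done. Below is a condensed bash sketch of that pattern, reconstructed from the xtrace rather than copied from common/autotest_common.sh; only the non-sudo branch is exercised in this run.

    killprocess() {
        # Sketch of the kill-and-wait pattern visible in the xtrace; not the verbatim helper.
        local pid=$1
        [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard seen in the trace
        kill -0 "$pid" || return 1                 # bail out if the process is already gone
        if [ "$(uname)" = Linux ] &&
           [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                            # reap it so the next test case starts clean
        fi
    }
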
00:23:28.299 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.299 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.299 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.299 21:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.211 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:30.211 00:23:30.211 real 0m8.053s 00:23:30.211 user 0m24.608s 00:23:30.211 sys 0m1.246s 00:23:30.211 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:30.211 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.211 ************************************ 00:23:30.211 END TEST nvmf_shutdown_tc2 00:23:30.211 ************************************ 00:23:30.211 21:13:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:30.211 21:13:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:30.211 21:13:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:30.211 21:13:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.211 21:13:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:30.472 ************************************ 00:23:30.472 START TEST nvmf_shutdown_tc3 00:23:30.472 ************************************ 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:30.472 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
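
Before tc3 can bring a target up, nvmftestinit has to translate the supported NIC IDs (the e810/x722/mlx arrays being filled from pci_bus_cache in the entries that follow) into real net devices. The sketch below is an assumed standalone illustration of that E810 discovery, not the nvmf/common.sh code itself, which works from its prebuilt pci_bus_cache instead of calling lspci.

    # Hypothetical illustration: find Intel E810 functions (vendor 0x8086, device 0x159b,
    # the IDs matched in the trace below) and list the net devices bound to each of them.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done
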
00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:30.473 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:30.473 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:30.473 Found net devices under 0000:31:00.0: cvl_0_0 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.473 21:13:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:30.473 Found net devices under 0000:31:00.1: cvl_0_1 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:30.473 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.734 21:13:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:30.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:23:30.734 00:23:30.734 --- 10.0.0.2 ping statistics --- 00:23:30.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.734 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:23:30.734 00:23:30.734 --- 10.0.0.1 ping statistics --- 00:23:30.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.734 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2043895 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2043895 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2043895 ']' 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.734 21:13:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.734 21:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.734 [2024-07-15 21:13:57.970040] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:23:30.734 [2024-07-15 21:13:57.970100] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.734 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.995 [2024-07-15 21:13:58.064323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.995 [2024-07-15 21:13:58.132943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.995 [2024-07-15 21:13:58.132981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.995 [2024-07-15 21:13:58.132986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.995 [2024-07-15 21:13:58.132991] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.995 [2024-07-15 21:13:58.132995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.995 [2024-07-15 21:13:58.133107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.995 [2024-07-15 21:13:58.133280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.995 [2024-07-15 21:13:58.133715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.995 [2024-07-15 21:13:58.133714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.567 [2024-07-15 21:13:58.776321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.567 21:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.567 Malloc1 00:23:31.828 [2024-07-15 21:13:58.875109] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.828 Malloc2 00:23:31.828 Malloc3 00:23:31.828 Malloc4 00:23:31.828 Malloc5 00:23:31.828 Malloc6 00:23:31.828 Malloc7 00:23:32.099 Malloc8 00:23:32.099 Malloc9 00:23:32.099 Malloc10 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2044122 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2044122 /var/tmp/bdevperf.sock 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2044122 ']' 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.099 { 00:23:32.099 "params": { 00:23:32.099 "name": "Nvme$subsystem", 00:23:32.099 "trtype": "$TEST_TRANSPORT", 00:23:32.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.099 "adrfam": "ipv4", 00:23:32.099 "trsvcid": "$NVMF_PORT", 00:23:32.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.099 "hdgst": ${hdgst:-false}, 00:23:32.099 "ddgst": ${ddgst:-false} 00:23:32.099 }, 00:23:32.099 "method": "bdev_nvme_attach_controller" 00:23:32.099 } 00:23:32.099 EOF 00:23:32.099 )") 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.099 { 00:23:32.099 "params": { 00:23:32.099 "name": "Nvme$subsystem", 00:23:32.099 "trtype": "$TEST_TRANSPORT", 00:23:32.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.099 "adrfam": "ipv4", 00:23:32.099 "trsvcid": "$NVMF_PORT", 00:23:32.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:32.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.099 "hdgst": ${hdgst:-false}, 00:23:32.099 "ddgst": ${ddgst:-false} 00:23:32.099 }, 00:23:32.099 "method": "bdev_nvme_attach_controller" 00:23:32.099 } 00:23:32.099 EOF 00:23:32.099 )") 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.099 { 00:23:32.099 "params": { 00:23:32.099 "name": "Nvme$subsystem", 00:23:32.099 "trtype": "$TEST_TRANSPORT", 00:23:32.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.099 "adrfam": "ipv4", 00:23:32.099 "trsvcid": "$NVMF_PORT", 00:23:32.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.099 "hdgst": ${hdgst:-false}, 00:23:32.099 "ddgst": ${ddgst:-false} 00:23:32.099 }, 00:23:32.099 "method": "bdev_nvme_attach_controller" 00:23:32.099 } 00:23:32.099 EOF 00:23:32.099 )") 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.099 { 00:23:32.099 "params": { 00:23:32.099 "name": "Nvme$subsystem", 00:23:32.099 "trtype": "$TEST_TRANSPORT", 00:23:32.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.099 "adrfam": "ipv4", 00:23:32.099 "trsvcid": "$NVMF_PORT", 00:23:32.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.099 "hdgst": ${hdgst:-false}, 00:23:32.099 "ddgst": ${ddgst:-false} 00:23:32.099 }, 00:23:32.099 "method": "bdev_nvme_attach_controller" 00:23:32.099 } 00:23:32.099 EOF 00:23:32.099 )") 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.099 { 00:23:32.099 "params": { 00:23:32.099 "name": "Nvme$subsystem", 00:23:32.099 "trtype": "$TEST_TRANSPORT", 00:23:32.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.099 "adrfam": "ipv4", 00:23:32.099 "trsvcid": "$NVMF_PORT", 00:23:32.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.099 "hdgst": ${hdgst:-false}, 00:23:32.099 "ddgst": ${ddgst:-false} 00:23:32.099 }, 00:23:32.099 "method": "bdev_nvme_attach_controller" 00:23:32.099 } 00:23:32.099 EOF 00:23:32.099 )") 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.099 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.099 { 00:23:32.099 "params": { 00:23:32.099 "name": "Nvme$subsystem", 00:23:32.099 "trtype": "$TEST_TRANSPORT", 00:23:32.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.099 "adrfam": "ipv4", 00:23:32.099 "trsvcid": "$NVMF_PORT", 00:23:32.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.100 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:32.100 "hdgst": ${hdgst:-false}, 00:23:32.100 "ddgst": ${ddgst:-false} 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 } 00:23:32.100 EOF 00:23:32.100 )") 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.100 [2024-07-15 21:13:59.322597] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:23:32.100 [2024-07-15 21:13:59.322650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044122 ] 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.100 { 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme$subsystem", 00:23:32.100 "trtype": "$TEST_TRANSPORT", 00:23:32.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "$NVMF_PORT", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.100 "hdgst": ${hdgst:-false}, 00:23:32.100 "ddgst": ${ddgst:-false} 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 } 00:23:32.100 EOF 00:23:32.100 )") 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.100 { 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme$subsystem", 00:23:32.100 "trtype": "$TEST_TRANSPORT", 00:23:32.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "$NVMF_PORT", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.100 "hdgst": ${hdgst:-false}, 00:23:32.100 "ddgst": ${ddgst:-false} 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 } 00:23:32.100 EOF 00:23:32.100 )") 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.100 { 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme$subsystem", 00:23:32.100 "trtype": "$TEST_TRANSPORT", 00:23:32.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "$NVMF_PORT", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.100 "hdgst": ${hdgst:-false}, 00:23:32.100 "ddgst": ${ddgst:-false} 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 } 00:23:32.100 EOF 00:23:32.100 )") 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.100 21:13:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.100 { 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme$subsystem", 00:23:32.100 "trtype": "$TEST_TRANSPORT", 00:23:32.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "$NVMF_PORT", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.100 "hdgst": ${hdgst:-false}, 00:23:32.100 "ddgst": ${ddgst:-false} 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 } 00:23:32.100 EOF 00:23:32.100 )") 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:32.100 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:32.100 21:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme1", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme2", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme3", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme4", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme5", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme6", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme7", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme8", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme9", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 },{ 00:23:32.100 "params": { 00:23:32.100 "name": "Nvme10", 00:23:32.100 "trtype": "tcp", 00:23:32.100 "traddr": "10.0.0.2", 00:23:32.100 "adrfam": "ipv4", 00:23:32.100 "trsvcid": "4420", 00:23:32.100 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:32.100 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:32.100 "hdgst": false, 00:23:32.100 "ddgst": false 00:23:32.100 }, 00:23:32.100 "method": "bdev_nvme_attach_controller" 00:23:32.100 }' 00:23:32.361 [2024-07-15 21:13:59.391433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.361 [2024-07-15 21:13:59.456438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.745 Running I/O for 10 seconds... 
00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:33.745 21:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:34.006 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:34.266 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:34.266 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:34.267 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:34.267 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:34.267 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.267 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2043895 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2043895 ']' 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2043895 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2043895 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2043895' 00:23:34.543 killing process with pid 2043895 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2043895 00:23:34.543 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2043895 00:23:34.543 [2024-07-15 21:14:01.634968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18971f0 is same with the state(5) to be set 00:23:34.543 [2024-07-15 21:14:01.635015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18971f0 is same with the state(5) to be set 00:23:34.543 [2024-07-15 21:14:01.635021] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18971f0 is same with the state(5) to be set 00:23:34.543 [2024-07-15 21:14:01.635026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
00:23:34.543 [2024-07-15 21:14:01.634968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18971f0 is same with the state(5) to be set
[... identical message repeated for tqpair=0x18971f0 through 21:14:01.635301 ...]
00:23:34.544 [2024-07-15 21:14:01.636391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d780 is same with the state(5) to be set
[... identical message repeated for tqpair=0x1a1d780 through 21:14:01.636698 ...]
00:23:34.545 [2024-07-15 21:14:01.637620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18976d0 is same with the state(5) to be set
[... identical message repeated for tqpair=0x18976d0 through 21:14:01.637912 ...]
00:23:34.545 [2024-07-15 21:14:01.639639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18980b0 is same with the state(5) to be set
[... identical message repeated for tqpair=0x18980b0 through 21:14:01.639938 ...]
00:23:34.546 [2024-07-15 21:14:01.640424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898590 is same with the state(5) to be set
[... identical message repeated for tqpair=0x1898590 through 21:14:01.640711 ...]
00:23:34.547 [2024-07-15 21:14:01.641861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898a90 is same with the state(5) to be set
[... identical message repeated for tqpair=0x1898a90 through 21:14:01.642098 ...]
00:23:34.548 [2024-07-15 21:14:01.642274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:34.548 [2024-07-15 21:14:01.642310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3 ...]
00:23:34.548 [2024-07-15 21:14:01.642371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ee8e0 is same with the state(5) to be set
[... the same block of four aborted ASYNC EVENT REQUESTs then logged for tqpair=0x11e6150 (21:14:01.642461), tqpair=0x101a6d0 (21:14:01.642543) and tqpair=0x10498c0 (21:14:01.642635) ...]
00:23:34.548 [2024-07-15 21:14:01.642549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set
[... identical message repeated for tqpair=0x1898f70 through 21:14:01.642638, its output interleaved with the nvme_qpair lines above ...]
00:23:34.548 [2024-07-15 21:14:01.642645]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-15 21:14:01.642659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:34.548 the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 21:14:01.642669] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.548 the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with [2024-07-15 21:14:01.642679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:23:34.548 id:0 cdw10:00000000 cdw11:00000000 00:23:34.548 [2024-07-15 21:14:01.642690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.548 [2024-07-15 21:14:01.642696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-15 21:14:01.642701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:34.548 the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.548 [2024-07-15 21:14:01.642714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-15 21:14:01.642719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:34.548 the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.548 [2024-07-15 21:14:01.642731] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188090 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-15 21:14:01.642761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:34.548 the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.548 [2024-07-15 21:14:01.642774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-07-15 21:14:01.642779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:34.548 the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.548 [2024-07-15 21:14:01.642793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.548 [2024-07-15 21:14:01.642798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with [2024-07-15 21:14:01.642798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsthe state(5) to be set 00:23:34.548 id:0 cdw10:00000000 cdw11:00000000 00:23:34.548 [2024-07-15 21:14:01.642805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.642810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.642820] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 21:14:01.642825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11edef0 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642859] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.642868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642874] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with [2024-07-15 21:14:01.642873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:23:34.549 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.642880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.642887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same 
with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.642897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.642906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.642912] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642917] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.642922] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 21:14:01.642927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898f70 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1034dd0 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.642957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.642965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.642974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.642981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.642989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.642996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.549 [2024-07-15 21:14:01.643012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1029c20 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643508] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with [2024-07-15 21:14:01.643513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:12the state(5) to be set 00:23:34.549 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643522] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with [2024-07-15 21:14:01.643522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:34.549 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643530] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with [2024-07-15 21:14:01.643534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:12the state(5) to be set 00:23:34.549 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643541] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643546] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643552] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643567] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643572] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with [2024-07-15 21:14:01.643572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:12the state(5) to be set 00:23:34.549 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with [2024-07-15 21:14:01.643581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:34.549 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643602] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.549 [2024-07-15 21:14:01.643610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.549 [2024-07-15 21:14:01.643634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.549 [2024-07-15 21:14:01.643650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.549 [2024-07-15 21:14:01.643659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 
[2024-07-15 21:14:01.643815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 
21:14:01.643983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.643992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.643999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 
21:14:01.644143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 
21:14:01.644308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.550 [2024-07-15 21:14:01.644349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.550 [2024-07-15 21:14:01.644356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 
21:14:01.644469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:34.551 [2024-07-15 21:14:01.644567] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11740e0 was disconnected and freed. reset controller. 00:23:34.551 [2024-07-15 21:14:01.644680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.644990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.644998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.551 [2024-07-15 21:14:01.645182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.551 [2024-07-15 21:14:01.645191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.645198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.552 [2024-07-15 21:14:01.645207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.645215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.552 [2024-07-15 21:14:01.645224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.645236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.552 [2024-07-15 21:14:01.645246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.645253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.552 [2024-07-15 21:14:01.645281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.645324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.552 [2024-07-15 21:14:01.645374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.645424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.552 [2024-07-15 21:14:01.645473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.645520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.552 [2024-07-15 21:14:01.645568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.645613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.552 [2024-07-15 21:14:01.645662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.552 [2024-07-15 21:14:01.655553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655577] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655593] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655672] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655720] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655731] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655743] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655755] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655815] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655820] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.655855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899450 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656468] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656478] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656482] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656486] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656491] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the 
state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656529] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656534] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656542] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.552 [2024-07-15 21:14:01.656547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656555] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656560] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656590] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656599] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656604] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656617] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656626] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656640] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656658] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656671] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656693] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656706] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656710] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 
21:14:01.656718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.656741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899930 is same with the state(5) to be set 00:23:34.553 [2024-07-15 21:14:01.663732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.663986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.663995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.664004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.664013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.664020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.664029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.664036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.664045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.664052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.664061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.664068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.664078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.664084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.664094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.664101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.664111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.553 [2024-07-15 21:14:01.664119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.553 [2024-07-15 21:14:01.664128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.664135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.664151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.664167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.664184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664258] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1174e10 was disconnected and freed. reset controller. 
00:23:34.554 [2024-07-15 21:14:01.664522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.554 [2024-07-15 21:14:01.664544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.554 [2024-07-15 21:14:01.664560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.554 [2024-07-15 21:14:01.664575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.554 [2024-07-15 21:14:01.664590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195350 is same with the state(5) to be set 00:23:34.554 [2024-07-15 21:14:01.664619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ee8e0 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.664633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6150 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.664648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101a6d0 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.664663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10498c0 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.664680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188090 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.664692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11edef0 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.664718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.554 [2024-07-15 21:14:01.664727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.554 [2024-07-15 21:14:01.664743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.554 [2024-07-15 21:14:01.664758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.554 [2024-07-15 21:14:01.664773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.664780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074e80 is same with the state(5) to be set 00:23:34.554 [2024-07-15 21:14:01.664798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1034dd0 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.664811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1029c20 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.667490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:34.554 [2024-07-15 21:14:01.667866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:34.554 [2024-07-15 21:14:01.668110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.554 [2024-07-15 21:14:01.668126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ee8e0 with addr=10.0.0.2, port=4420 00:23:34.554 [2024-07-15 21:14:01.668135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ee8e0 is same with the state(5) to be set 00:23:34.554 [2024-07-15 21:14:01.669505] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.554 [2024-07-15 21:14:01.669877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.554 [2024-07-15 21:14:01.669892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1034dd0 with addr=10.0.0.2, port=4420 00:23:34.554 [2024-07-15 21:14:01.669900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1034dd0 is same with the state(5) to be set 00:23:34.554 [2024-07-15 21:14:01.669911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ee8e0 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.669958] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.554 [2024-07-15 21:14:01.669997] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.554 [2024-07-15 21:14:01.670047] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.554 [2024-07-15 21:14:01.670084] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.554 [2024-07-15 21:14:01.670128] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.554 [2024-07-15 21:14:01.670270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.554 [2024-07-15 21:14:01.670478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:34.554 [2024-07-15 21:14:01.670486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1177790 is same with the state(5) to be set 00:23:34.554 [2024-07-15 21:14:01.670531] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1177790 was disconnected and freed. reset controller. 00:23:34.554 [2024-07-15 21:14:01.670573] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.554 [2024-07-15 21:14:01.670605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1034dd0 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.670616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:34.554 [2024-07-15 21:14:01.670623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:34.554 [2024-07-15 21:14:01.670633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:34.554 [2024-07-15 21:14:01.671662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.554 [2024-07-15 21:14:01.671676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:34.554 [2024-07-15 21:14:01.671688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1195350 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.671698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:34.554 [2024-07-15 21:14:01.671706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:34.554 [2024-07-15 21:14:01.671714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:34.554 [2024-07-15 21:14:01.671764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.554 [2024-07-15 21:14:01.672463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.554 [2024-07-15 21:14:01.672501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1195350 with addr=10.0.0.2, port=4420 00:23:34.554 [2024-07-15 21:14:01.672513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195350 is same with the state(5) to be set 00:23:34.554 [2024-07-15 21:14:01.672580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1195350 (9): Bad file descriptor 00:23:34.554 [2024-07-15 21:14:01.672629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:34.555 [2024-07-15 21:14:01.672642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:34.555 [2024-07-15 21:14:01.672649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:34.555 [2024-07-15 21:14:01.672693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.555 [2024-07-15 21:14:01.674539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074e80 (9): Bad file descriptor 00:23:34.555 [2024-07-15 21:14:01.674654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.674983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.674991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.555 [2024-07-15 21:14:01.675155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.555 [2024-07-15 21:14:01.675163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:34.555 [2024-07-15 21:14:01.675172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.555 [2024-07-15 21:14:01.675180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:31-63, lba:20352-24448 ...]
00:23:34.556 [2024-07-15 21:14:01.675742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e54d0 is same with the state(5) to be set
00:23:34.556 [2024-07-15 21:14:01.677030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.556 [2024-07-15 21:14:01.677045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:1-63, lba:24704-32640 ...]
00:23:34.558 [2024-07-15 21:14:01.678137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e6810 is same with the state(5) to be set
00:23:34.558 [2024-07-15 21:14:01.679412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.558 [2024-07-15 21:14:01.679426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:1-63, lba:24704-32640 ...]
00:23:34.559 [2024-07-15 21:14:01.680499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1014ed0 is same with the state(5) to be set
00:23:34.559 [2024-07-15 21:14:01.681761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.559 [2024-07-15 21:14:01.681773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:58-63, lba:32000-32640 ...]
00:23:34.559 [2024-07-15 21:14:01.681889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.559 [2024-07-15 21:14:01.681896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:1-29, lba:24704-28288 ...]
00:23:34.560 [2024-07-15 21:14:01.682393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.560 [2024-07-15 21:14:01.682582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.560 [2024-07-15 21:14:01.682591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.561 [2024-07-15 21:14:01.682734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.682834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.682842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10163a0 is same with the state(5) to be set 00:23:34.561 [2024-07-15 21:14:01.684106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 
21:14:01.684165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684344] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.561 [2024-07-15 21:14:01.684577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.561 [2024-07-15 21:14:01.684584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.684984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.684993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.685184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.685192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11762e0 is same with the state(5) to be set 00:23:34.562 [2024-07-15 21:14:01.686476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.686492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.686504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.686511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.686524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.686531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.686541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.686548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.686558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.686565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.562 [2024-07-15 21:14:01.686574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.562 [2024-07-15 21:14:01.686581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686632] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.686990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.686997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.563 [2024-07-15 21:14:01.687285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.563 [2024-07-15 21:14:01.687295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:34.564 [2024-07-15 21:14:01.687311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 
21:14:01.687478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.687551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.687559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1179f30 is same with the state(5) to be set 00:23:34.564 [2024-07-15 21:14:01.690252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.564 [2024-07-15 21:14:01.690293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:34.564 [2024-07-15 21:14:01.690304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:34.564 [2024-07-15 21:14:01.690313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:34.564 [2024-07-15 21:14:01.690393] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.564 [2024-07-15 21:14:01.690414] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
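The repeated *NOTICE* pairs above are SPDK printing, for each I/O still outstanding on a deleted submission queue, first the command (nvme_io_qpair_print_command) and then its completion with status ABORTED - SQ DELETION (00/08); in the NVMe specification, generic command status 0x08 is "Command Aborted due to SQ Deletion", which lines up with the nvme_ctrlr_disconnect notices showing the controllers being reset deliberately. When triaging a run like this it is usually enough to collapse the repetition into counts; a minimal sketch with standard grep/awk, assuming the console output has been saved to console.log (hypothetical filename):

  # Total aborted completions reported in the log
  grep -o 'ABORTED - SQ DELETION (00/08)' console.log | wc -l

  # Aborted commands broken down by opcode and submission queue id
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]* sqid:[0-9]*' console.log \
      | awk '{print $3, $4}' | sort | uniq -c

  # Which subsystems were reset, and how often
  grep -o 'nqn\.[^]]*] resetting controller' console.log | sort | uniq -c

The connect() failed, errno = 111 errors a few lines further down are ECONNREFUSED, so the sock connection error lines for the various tqpair addresses appear to be reconnect attempts made while 10.0.0.2:4420 is not yet accepting connections again, consistent with the reset window rather than an independent failure.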
00:23:34.564 [2024-07-15 21:14:01.690501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:34.564 [2024-07-15 21:14:01.690513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:34.564 [2024-07-15 21:14:01.690970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.564 [2024-07-15 21:14:01.690987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101a6d0 with addr=10.0.0.2, port=4420 00:23:34.564 [2024-07-15 21:14:01.690996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101a6d0 is same with the state(5) to be set 00:23:34.564 [2024-07-15 21:14:01.691373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.564 [2024-07-15 21:14:01.691383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6150 with addr=10.0.0.2, port=4420 00:23:34.564 [2024-07-15 21:14:01.691391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6150 is same with the state(5) to be set 00:23:34.564 [2024-07-15 21:14:01.691803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.564 [2024-07-15 21:14:01.691812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10498c0 with addr=10.0.0.2, port=4420 00:23:34.564 [2024-07-15 21:14:01.691819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10498c0 is same with the state(5) to be set 00:23:34.564 [2024-07-15 21:14:01.692207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.564 [2024-07-15 21:14:01.692216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11edef0 with addr=10.0.0.2, port=4420 00:23:34.564 [2024-07-15 21:14:01.692223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11edef0 is same with the state(5) to be set 00:23:34.564 [2024-07-15 21:14:01.693580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.564 [2024-07-15 21:14:01.693793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.564 [2024-07-15 21:14:01.693802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.693986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.693993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.565 [2024-07-15 21:14:01.694501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.565 [2024-07-15 21:14:01.694510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.566 [2024-07-15 21:14:01.694671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.566 [2024-07-15 21:14:01.694679] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1178a80 is same with the state(5) to be set
00:23:34.566 [2024-07-15 21:14:01.696432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:34.566 [2024-07-15 21:14:01.696455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:34.566 [2024-07-15 21:14:01.696464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:34.566 task offset: 24576 on job bdev=Nvme3n1 fails
00:23:34.566
00:23:34.566 Latency(us)
00:23:34.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:34.566 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme1n1 ended in about 0.95 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme1n1 : 0.95 134.28 8.39 67.14 0.00 314376.82 17803.95 242920.11
00:23:34.566 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme2n1 ended in about 0.96 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme2n1 : 0.96 200.92 12.56 66.97 0.00 231489.71 40850.77 227191.47
00:23:34.566 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme3n1 ended in about 0.94 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme3n1 : 0.94 203.74 12.73 67.91 0.00 223472.85 21517.65 248162.99
00:23:34.566 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme4n1 ended in about 0.96 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme4n1 : 0.96 200.43 12.53 66.81 0.00 222646.19 20097.71 244667.73
00:23:34.566 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme5n1 ended in about 0.96 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme5n1 : 0.96 199.94 12.50 66.65 0.00 218505.39 14527.15 244667.73
00:23:34.566 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme6n1 ended in about 0.94 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme6n1 : 0.94 203.47 12.72 67.82 0.00 209587.84 18786.99 239424.85
00:23:34.566 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme7n1 ended in about 0.96 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme7n1 : 0.96 203.60 12.73 66.48 0.00 206363.20 18350.08 214958.08
00:23:34.566 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme8n1 ended in about 0.95 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme8n1 : 0.95 197.27 12.33 12.66 0.00 258500.55 37137.07 269134.51
00:23:34.566 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme9n1 ended in about 0.97 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme9n1 : 0.97 131.67 8.23 65.84 0.00 270241.00 30146.56 242920.11
00:23:34.566 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.566 Job: Nvme10n1 ended in about 0.97 seconds with error
00:23:34.566 Verification LBA range: start 0x0 length 0x400
00:23:34.566 Nvme10n1 : 0.97 132.64 8.29 66.32 0.00 261742.93 17694.72 262144.00
00:23:34.566 ===================================================================================================================
00:23:34.566 Total : 1807.96 113.00 614.60 0.00 237864.80 14527.15 269134.51
00:23:34.566 [2024-07-15 21:14:01.721114] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:34.566 [2024-07-15 21:14:01.721161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:34.566 [2024-07-15 21:14:01.721668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.566 [2024-07-15 21:14:01.721687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1029c20 with addr=10.0.0.2, port=4420
00:23:34.566 [2024-07-15 21:14:01.721698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1029c20 is same with the state(5) to be set
00:23:34.566 [2024-07-15 21:14:01.722079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.566 [2024-07-15 21:14:01.722089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188090 with addr=10.0.0.2, port=4420
00:23:34.566 [2024-07-15 21:14:01.722096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188090 is same with the state(5) to be set
00:23:34.566 [2024-07-15 21:14:01.722115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101a6d0 (9): Bad file descriptor
00:23:34.566 [2024-07-15 21:14:01.722127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6150 (9): Bad file descriptor
00:23:34.566 [2024-07-15 21:14:01.722137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10498c0 (9): Bad file descriptor
00:23:34.566 [2024-07-15 21:14:01.722146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11edef0 (9): Bad file descriptor
00:23:34.566 [2024-07-15 21:14:01.722646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.566 [2024-07-15 21:14:01.722661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ee8e0 with addr=10.0.0.2, port=4420
00:23:34.566 [2024-07-15 21:14:01.722668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ee8e0 is same with the state(5) to be set
00:23:34.566 [2024-07-15 21:14:01.723041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.566 [2024-07-15 21:14:01.723051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1034dd0 with addr=10.0.0.2, port=4420
00:23:34.566 [2024-07-15 21:14:01.723059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1034dd0 is same with the state(5) to be set
00:23:34.566 [2024-07-15 21:14:01.723320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.566 [2024-07-15 21:14:01.723330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1195350 with addr=10.0.0.2, port=4420
00:23:34.566 [2024-07-15 21:14:01.723337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195350 is same with the state(5) to be set
00:23:34.566 [2024-07-15 21:14:01.723719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed,
errno = 111 00:23:34.566 [2024-07-15 21:14:01.723728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1074e80 with addr=10.0.0.2, port=4420 00:23:34.566 [2024-07-15 21:14:01.723735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074e80 is same with the state(5) to be set 00:23:34.566 [2024-07-15 21:14:01.723744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1029c20 (9): Bad file descriptor 00:23:34.566 [2024-07-15 21:14:01.723754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188090 (9): Bad file descriptor 00:23:34.566 [2024-07-15 21:14:01.723762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.566 [2024-07-15 21:14:01.723769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.566 [2024-07-15 21:14:01.723778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.566 [2024-07-15 21:14:01.723790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:34.566 [2024-07-15 21:14:01.723796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:34.566 [2024-07-15 21:14:01.723802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:34.566 [2024-07-15 21:14:01.723813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:34.566 [2024-07-15 21:14:01.723819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:34.566 [2024-07-15 21:14:01.723826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:34.566 [2024-07-15 21:14:01.723837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:34.566 [2024-07-15 21:14:01.723843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:34.566 [2024-07-15 21:14:01.723849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:34.566 [2024-07-15 21:14:01.723874] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.566 [2024-07-15 21:14:01.723886] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.566 [2024-07-15 21:14:01.723896] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.566 [2024-07-15 21:14:01.723906] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.567 [2024-07-15 21:14:01.723924] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.567 [2024-07-15 21:14:01.723935] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.567 [2024-07-15 21:14:01.724276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.567 [2024-07-15 21:14:01.724286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.567 [2024-07-15 21:14:01.724292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.567 [2024-07-15 21:14:01.724299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.567 [2024-07-15 21:14:01.724306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ee8e0 (9): Bad file descriptor 00:23:34.567 [2024-07-15 21:14:01.724315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1034dd0 (9): Bad file descriptor 00:23:34.567 [2024-07-15 21:14:01.724324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1195350 (9): Bad file descriptor 00:23:34.567 [2024-07-15 21:14:01.724333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074e80 (9): Bad file descriptor 00:23:34.567 [2024-07-15 21:14:01.724341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:34.567 [2024-07-15 21:14:01.724347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:34.567 [2024-07-15 21:14:01.724354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:34.567 [2024-07-15 21:14:01.724364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:34.567 [2024-07-15 21:14:01.724370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:34.567 [2024-07-15 21:14:01.724376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:34.567 [2024-07-15 21:14:01.724415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.567 [2024-07-15 21:14:01.724422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.567 [2024-07-15 21:14:01.724429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:34.567 [2024-07-15 21:14:01.724435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:34.567 [2024-07-15 21:14:01.724442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:34.567 [2024-07-15 21:14:01.724451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:34.567 [2024-07-15 21:14:01.724458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:34.567 [2024-07-15 21:14:01.724464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:34.567 [2024-07-15 21:14:01.724473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:34.567 [2024-07-15 21:14:01.724480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:34.567 [2024-07-15 21:14:01.724489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
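(Editorial aside, not part of the captured output.) For what it is worth, the bdevperf summary printed above is internally consistent: every job ran with a 64 KiB IO size ("IO size: 65536"), so the MiB/s column is simply the IOPS column scaled by 65536 bytes. A small, illustrative Python sanity check using the figures from the Nvme1n1 and Total rows:

    # Quick arithmetic check of the Latency(us)/throughput table above.
    IO_SIZE = 65536  # bytes, from "IO size: 65536" in each job header

    def mib_per_sec(iops: float) -> float:
        """MiB/s implied by an IOPS figure at a fixed IO size."""
        return iops * IO_SIZE / (1024 * 1024)

    print(round(mib_per_sec(134.28), 2))   # ~8.39, matches the Nvme1n1 row
    print(round(mib_per_sec(1807.96), 2))  # ~113.0, matches the Total row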
00:23:34.567 [2024-07-15 21:14:01.724499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:34.567 [2024-07-15 21:14:01.724505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:34.567 [2024-07-15 21:14:01.724512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:34.567 [2024-07-15 21:14:01.724543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.567 [2024-07-15 21:14:01.724550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.567 [2024-07-15 21:14:01.724556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.567 [2024-07-15 21:14:01.724562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.827 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:34.828 21:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2044122 00:23:35.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2044122) - No such process 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.771 21:14:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.771 rmmod nvme_tcp 00:23:35.771 rmmod nvme_fabrics 00:23:35.771 rmmod nvme_keyring 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.771 21:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.316 21:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.316 00:23:38.316 real 0m7.572s 00:23:38.316 user 0m17.711s 00:23:38.316 sys 0m1.226s 00:23:38.316 21:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.316 21:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.316 ************************************ 00:23:38.316 END TEST nvmf_shutdown_tc3 00:23:38.316 ************************************ 00:23:38.316 21:14:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:38.316 21:14:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:38.316 00:23:38.316 real 0m32.716s 00:23:38.316 user 1m14.990s 00:23:38.316 sys 0m9.661s 00:23:38.316 21:14:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.316 21:14:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:38.316 ************************************ 00:23:38.316 END TEST nvmf_shutdown 00:23:38.316 ************************************ 00:23:38.316 21:14:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:38.316 21:14:05 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:38.316 21:14:05 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:38.316 21:14:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.316 21:14:05 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:38.316 21:14:05 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.316 21:14:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.316 21:14:05 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:38.316 21:14:05 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:38.316 21:14:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:38.316 21:14:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.316 21:14:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.316 ************************************ 00:23:38.316 START TEST nvmf_multicontroller 00:23:38.316 ************************************ 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:38.316 * Looking for test storage... 
00:23:38.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:38.316 21:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:38.317 21:14:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.317 21:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.456 21:14:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:46.456 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:46.456 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:46.456 Found net devices under 0000:31:00.0: cvl_0_0 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:46.456 Found net devices under 0000:31:00.1: cvl_0_1 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.456 21:14:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:23:46.456 00:23:46.456 --- 10.0.0.2 ping statistics --- 00:23:46.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.456 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:23:46.456 00:23:46.456 --- 10.0.0.1 ping statistics --- 00:23:46.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.456 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2049568 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2049568 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2049568 ']' 00:23:46.456 21:14:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.457 21:14:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.457 21:14:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.457 21:14:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.457 21:14:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.457 [2024-07-15 21:14:13.455979] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:23:46.457 [2024-07-15 21:14:13.456026] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.457 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.457 [2024-07-15 21:14:13.545665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:46.457 [2024-07-15 21:14:13.610003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.457 [2024-07-15 21:14:13.610040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.457 [2024-07-15 21:14:13.610048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.457 [2024-07-15 21:14:13.610055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.457 [2024-07-15 21:14:13.610060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
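The nvmf_tcp_init steps traced above build the loopback topology the TCP tests run on: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and a ping in each direction confirms the link. A minimal stand-alone sketch of the same sequence, assuming it runs as root on a machine whose two ports carry the same names as on this rig:

#!/usr/bin/env bash
# Sketch of the namespace-based NVMe/TCP loopback used by these tests
# (assumes root and that the two ports are named cvl_0_0 / cvl_0_1 as in the log).
set -e
TGT_IF=cvl_0_0          # port handed to the SPDK target
INI_IF=cvl_0_1          # port kept in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                               # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                           # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"       # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                              # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                          # target namespace -> initiator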
00:23:46.457 [2024-07-15 21:14:13.610163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.457 [2024-07-15 21:14:13.610319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.457 [2024-07-15 21:14:13.610481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.027 [2024-07-15 21:14:14.277538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.027 Malloc0 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.027 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 [2024-07-15 21:14:14.343612] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 
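The target-side provisioning in this trace is all driven through rpc_cmd, which wraps scripts/rpc.py talking to the nvmf_tgt RPC socket. Outside the harness the same subsystem layout can be reproduced with the calls below, a sketch assuming the SPDK repo root as working directory and /var/tmp/spdk.sock as the application socket (the one waitforlisten polls above); cnode2/Malloc1 follow the same pattern:

# Sketch: recreate the cnode1 layout shown above with plain rpc.py calls.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # second port, used later for the multipath checks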
21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 [2024-07-15 21:14:14.355543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 Malloc1 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2049883 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2049883 /var/tmp/bdevperf.sock 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2049883 ']' 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.289 21:14:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 NVMe0n1 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.230 1 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 request: 00:23:48.230 { 00:23:48.230 "name": "NVMe0", 00:23:48.230 "trtype": "tcp", 00:23:48.230 "traddr": "10.0.0.2", 00:23:48.230 "adrfam": "ipv4", 00:23:48.230 "trsvcid": "4420", 00:23:48.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.230 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:48.230 "hostaddr": "10.0.0.2", 00:23:48.230 "hostsvcid": "60000", 00:23:48.230 "prchk_reftag": false, 00:23:48.230 "prchk_guard": false, 00:23:48.230 "hdgst": false, 00:23:48.230 "ddgst": false, 00:23:48.230 "method": "bdev_nvme_attach_controller", 00:23:48.230 "req_id": 1 00:23:48.230 } 00:23:48.230 Got JSON-RPC error response 00:23:48.230 response: 00:23:48.230 { 00:23:48.230 "code": -114, 00:23:48.230 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:48.230 } 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 request: 00:23:48.230 { 00:23:48.230 "name": "NVMe0", 00:23:48.230 "trtype": "tcp", 00:23:48.230 "traddr": "10.0.0.2", 00:23:48.230 "adrfam": "ipv4", 00:23:48.230 "trsvcid": "4420", 00:23:48.230 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:48.230 "hostaddr": "10.0.0.2", 00:23:48.230 "hostsvcid": "60000", 00:23:48.230 "prchk_reftag": false, 00:23:48.230 "prchk_guard": false, 00:23:48.230 
"hdgst": false, 00:23:48.230 "ddgst": false, 00:23:48.230 "method": "bdev_nvme_attach_controller", 00:23:48.230 "req_id": 1 00:23:48.230 } 00:23:48.230 Got JSON-RPC error response 00:23:48.230 response: 00:23:48.230 { 00:23:48.230 "code": -114, 00:23:48.230 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:48.230 } 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.230 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 request: 00:23:48.230 { 00:23:48.230 "name": "NVMe0", 00:23:48.230 "trtype": "tcp", 00:23:48.230 "traddr": "10.0.0.2", 00:23:48.230 "adrfam": "ipv4", 00:23:48.230 "trsvcid": "4420", 00:23:48.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.230 "hostaddr": "10.0.0.2", 00:23:48.230 "hostsvcid": "60000", 00:23:48.230 "prchk_reftag": false, 00:23:48.230 "prchk_guard": false, 00:23:48.230 "hdgst": false, 00:23:48.230 "ddgst": false, 00:23:48.230 "multipath": "disable", 00:23:48.230 "method": "bdev_nvme_attach_controller", 00:23:48.230 "req_id": 1 00:23:48.230 } 00:23:48.230 Got JSON-RPC error response 00:23:48.230 response: 00:23:48.230 { 00:23:48.230 "code": -114, 00:23:48.230 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:48.230 } 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:48.492 21:14:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.492 request: 00:23:48.492 { 00:23:48.492 "name": "NVMe0", 00:23:48.492 "trtype": "tcp", 00:23:48.492 "traddr": "10.0.0.2", 00:23:48.492 "adrfam": "ipv4", 00:23:48.492 "trsvcid": "4420", 00:23:48.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.492 "hostaddr": "10.0.0.2", 00:23:48.492 "hostsvcid": "60000", 00:23:48.492 "prchk_reftag": false, 00:23:48.492 "prchk_guard": false, 00:23:48.492 "hdgst": false, 00:23:48.492 "ddgst": false, 00:23:48.492 "multipath": "failover", 00:23:48.492 "method": "bdev_nvme_attach_controller", 00:23:48.492 "req_id": 1 00:23:48.492 } 00:23:48.492 Got JSON-RPC error response 00:23:48.492 response: 00:23:48.492 { 00:23:48.492 "code": -114, 00:23:48.492 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:48.492 } 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.492 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.492 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.752 00:23:48.752 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.752 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:48.752 21:14:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:48.752 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.752 21:14:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.752 21:14:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.752 21:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:48.752 21:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.186 0 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2049883 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2049883 ']' 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2049883 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2049883 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2049883' 00:23:50.186 killing process with pid 2049883 00:23:50.186 21:14:17 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2049883 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2049883 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:50.186 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:50.186 [2024-07-15 21:14:14.473567] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:23:50.186 [2024-07-15 21:14:14.473619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049883 ] 00:23:50.186 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.186 [2024-07-15 21:14:14.542854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.186 [2024-07-15 21:14:14.607786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.186 [2024-07-15 21:14:15.979279] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d9a6df08-0cc4-4612-8e43-950611953642 already exists 00:23:50.186 [2024-07-15 21:14:15.979312] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d9a6df08-0cc4-4612-8e43-950611953642 alias for bdev NVMe1n1 00:23:50.186 [2024-07-15 21:14:15.979320] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:50.186 Running I/O for 1 seconds... 
00:23:50.186 00:23:50.186 Latency(us) 00:23:50.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.186 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:50.186 NVMe0n1 : 1.00 27911.60 109.03 0.00 0.00 4575.65 2798.93 10758.83 00:23:50.186 =================================================================================================================== 00:23:50.186 Total : 27911.60 109.03 0.00 0.00 4575.65 2798.93 10758.83 00:23:50.186 Received shutdown signal, test time was about 1.000000 seconds 00:23:50.186 00:23:50.186 Latency(us) 00:23:50.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.186 =================================================================================================================== 00:23:50.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.186 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:50.186 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.187 rmmod nvme_tcp 00:23:50.187 rmmod nvme_fabrics 00:23:50.187 rmmod nvme_keyring 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2049568 ']' 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2049568 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2049568 ']' 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2049568 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.187 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2049568 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2049568' 00:23:50.448 killing process with pid 2049568 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2049568 00:23:50.448 21:14:17 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2049568 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.448 21:14:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.998 21:14:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:52.998 00:23:52.998 real 0m14.469s 00:23:52.998 user 0m17.568s 00:23:52.998 sys 0m6.552s 00:23:52.998 21:14:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:52.998 21:14:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.998 ************************************ 00:23:52.998 END TEST nvmf_multicontroller 00:23:52.998 ************************************ 00:23:52.998 21:14:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:52.998 21:14:19 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:52.998 21:14:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:52.998 21:14:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.998 21:14:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.998 ************************************ 00:23:52.998 START TEST nvmf_aer 00:23:52.998 ************************************ 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:52.998 * Looking for test storage... 
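For reference, the multicontroller run that just finished boils down to the bdevperf-driven sequence below: attach a first path, verify that conflicting re-attach attempts fail with JSON-RPC error -114, add and manage a second path on port 4421, then run one I/O pass. This is a condensed sketch using the same binaries and RPCs that appear in the trace, not the full multicontroller.sh:

# Condensed sketch of the multipath checks exercised above (run from the SPDK repo root).
BPERF_SOCK=/var/tmp/bdevperf.sock
build/examples/bdevperf -z -r "$BPERF_SOCK" -q 128 -o 4096 -w write -t 1 -f &
sleep 2   # the real test uses waitforlisten on $BPERF_SOCK instead of a fixed sleep

# First path: creates bdev NVMe0n1 on top of cnode1 via 10.0.0.2:4420.
scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# The trace shows four re-attach attempts that are all rejected with error -114:
#   same name, different host NQN     (-q nqn.2021-09-7.io.spdk:00001)
#   same name, different subsystem    (-n nqn.2016-06.io.spdk:cnode2)
#   same path with -x disable         ("already exists and multipath is disabled")
#   same path with -x failover

# A second path on port 4421 is accepted, then detached, then re-used for a separate NVMe1 controller.
scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# One-second write workload, driven over the bdevperf RPC socket.
examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests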
00:23:52.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.998 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.999 21:14:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:01.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:24:01.137 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:01.137 Found net devices under 0000:31:00.0: cvl_0_0 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:01.137 Found net devices under 0000:31:00.1: cvl_0_1 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.137 
21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.137 21:14:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:24:01.137 00:24:01.137 --- 10.0.0.2 ping statistics --- 00:24:01.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.137 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:24:01.137 00:24:01.137 --- 10.0.0.1 ping statistics --- 00:24:01.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.137 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.137 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2055224 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2055224 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2055224 ']' 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.138 21:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.138 [2024-07-15 21:14:28.281597] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:24:01.138 [2024-07-15 21:14:28.281659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.138 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.138 [2024-07-15 21:14:28.358763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.138 [2024-07-15 21:14:28.424646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.138 [2024-07-15 21:14:28.424683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
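For reference, the nvmf_tcp_init sequence recorded above reduces to the shell commands below. This is a condensed sketch of what the log shows rather than a verbatim excerpt; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this run derived from the two E810 ports it discovered.

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 on cvl_0_1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in from the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt invocation that follows is prefixed with "ip netns exec cvl_0_0_ns_spdk" (via NVMF_TARGET_NS_CMD), so the target application only ever sees the namespaced port.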
00:24:01.138 [2024-07-15 21:14:28.424690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.138 [2024-07-15 21:14:28.424700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.138 [2024-07-15 21:14:28.424705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.398 [2024-07-15 21:14:28.426905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.398 [2024-07-15 21:14:28.427063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.398 [2024-07-15 21:14:28.427216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.398 [2024-07-15 21:14:28.427217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.968 [2024-07-15 21:14:29.097871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.968 Malloc0 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.968 [2024-07-15 21:14:29.157276] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.968 [ 00:24:01.968 { 00:24:01.968 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:01.968 "subtype": "Discovery", 00:24:01.968 "listen_addresses": [], 00:24:01.968 "allow_any_host": true, 00:24:01.968 "hosts": [] 00:24:01.968 }, 00:24:01.968 { 00:24:01.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.968 "subtype": "NVMe", 00:24:01.968 "listen_addresses": [ 00:24:01.968 { 00:24:01.968 "trtype": "TCP", 00:24:01.968 "adrfam": "IPv4", 00:24:01.968 "traddr": "10.0.0.2", 00:24:01.968 "trsvcid": "4420" 00:24:01.968 } 00:24:01.968 ], 00:24:01.968 "allow_any_host": true, 00:24:01.968 "hosts": [], 00:24:01.968 "serial_number": "SPDK00000000000001", 00:24:01.968 "model_number": "SPDK bdev Controller", 00:24:01.968 "max_namespaces": 2, 00:24:01.968 "min_cntlid": 1, 00:24:01.968 "max_cntlid": 65519, 00:24:01.968 "namespaces": [ 00:24:01.968 { 00:24:01.968 "nsid": 1, 00:24:01.968 "bdev_name": "Malloc0", 00:24:01.968 "name": "Malloc0", 00:24:01.968 "nguid": "F4FC5ACFD93D4BE68452763B56E492CA", 00:24:01.968 "uuid": "f4fc5acf-d93d-4be6-8452-763b56e492ca" 00:24:01.968 } 00:24:01.968 ] 00:24:01.968 } 00:24:01.968 ] 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2055283 00:24:01.968 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:01.969 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:01.969 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:01.969 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.969 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:01.969 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:01.969 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:01.969 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:02.229 Malloc1 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.229 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:02.229 Asynchronous Event Request test 00:24:02.229 Attaching to 10.0.0.2 00:24:02.229 Attached to 10.0.0.2 00:24:02.229 Registering asynchronous event callbacks... 00:24:02.229 Starting namespace attribute notice tests for all controllers... 00:24:02.229 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:02.229 aer_cb - Changed Namespace 00:24:02.229 Cleaning up... 00:24:02.229 [ 00:24:02.229 { 00:24:02.229 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:02.229 "subtype": "Discovery", 00:24:02.229 "listen_addresses": [], 00:24:02.229 "allow_any_host": true, 00:24:02.229 "hosts": [] 00:24:02.229 }, 00:24:02.229 { 00:24:02.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.229 "subtype": "NVMe", 00:24:02.229 "listen_addresses": [ 00:24:02.229 { 00:24:02.229 "trtype": "TCP", 00:24:02.229 "adrfam": "IPv4", 00:24:02.229 "traddr": "10.0.0.2", 00:24:02.229 "trsvcid": "4420" 00:24:02.229 } 00:24:02.229 ], 00:24:02.229 "allow_any_host": true, 00:24:02.229 "hosts": [], 00:24:02.229 "serial_number": "SPDK00000000000001", 00:24:02.229 "model_number": "SPDK bdev Controller", 00:24:02.229 "max_namespaces": 2, 00:24:02.229 "min_cntlid": 1, 00:24:02.229 "max_cntlid": 65519, 00:24:02.229 "namespaces": [ 00:24:02.229 { 00:24:02.229 "nsid": 1, 00:24:02.229 "bdev_name": "Malloc0", 00:24:02.229 "name": "Malloc0", 00:24:02.229 "nguid": "F4FC5ACFD93D4BE68452763B56E492CA", 00:24:02.229 "uuid": "f4fc5acf-d93d-4be6-8452-763b56e492ca" 00:24:02.229 }, 00:24:02.229 { 00:24:02.229 "nsid": 2, 00:24:02.229 "bdev_name": "Malloc1", 00:24:02.229 "name": "Malloc1", 00:24:02.230 "nguid": "483ED7BCA8324AB188D9BD4BE9324100", 00:24:02.230 "uuid": "483ed7bc-a832-4ab1-88d9-bd4be9324100" 00:24:02.230 } 00:24:02.230 ] 00:24:02.230 } 00:24:02.230 ] 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2055283 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.230 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.491 rmmod nvme_tcp 00:24:02.491 rmmod nvme_fabrics 00:24:02.491 rmmod nvme_keyring 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2055224 ']' 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2055224 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2055224 ']' 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2055224 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2055224 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2055224' 00:24:02.491 killing process with pid 2055224 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2055224 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2055224 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:24:02.491 21:14:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.040 21:14:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:05.040 00:24:05.040 real 0m12.048s 00:24:05.040 user 0m7.685s 00:24:05.040 sys 0m6.563s 00:24:05.040 21:14:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:05.040 21:14:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.040 ************************************ 00:24:05.040 END TEST nvmf_aer 00:24:05.040 ************************************ 00:24:05.040 21:14:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:05.040 21:14:31 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:05.040 21:14:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:05.040 21:14:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.040 21:14:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.040 ************************************ 00:24:05.040 START TEST nvmf_async_init 00:24:05.040 ************************************ 00:24:05.040 21:14:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:05.040 * Looking for test storage... 00:24:05.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:05.040 21:14:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.040 21:14:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=389de56cbd1d42458e21dcd986df31b0 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:05.040 21:14:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:13.183 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:13.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:13.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:13.184 Found net devices under 0000:31:00.0: cvl_0_0 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:13.184 Found net devices under 0000:31:00.1: cvl_0_1 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:13.184 
21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.184 21:14:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:13.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:24:13.184 00:24:13.184 --- 10.0.0.2 ping statistics --- 00:24:13.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.184 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:24:13.184 00:24:13.184 --- 10.0.0.1 ping statistics --- 00:24:13.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.184 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2060050 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 
2060050 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2060050 ']' 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.184 21:14:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.184 [2024-07-15 21:14:40.351283] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:24:13.184 [2024-07-15 21:14:40.351357] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.184 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.184 [2024-07-15 21:14:40.430104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.445 [2024-07-15 21:14:40.503986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.445 [2024-07-15 21:14:40.504023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.445 [2024-07-15 21:14:40.504031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.445 [2024-07-15 21:14:40.504037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.445 [2024-07-15 21:14:40.504043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
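nvmfappstart, as logged here, launches nvmf_tgt inside the target namespace and then sits in waitforlisten until the application answers on its JSON-RPC socket. The loop below is a minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and scripts/rpc.py from the SPDK tree; the actual waitforlisten helper is sourced from the shared test harness and is not reproduced in this log.

    # start the target pinned to core 0 (-m 0x1) inside the test namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the RPC socket until the app is up and serving requests
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done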
00:24:13.445 [2024-07-15 21:14:40.504066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.016 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.016 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:14.016 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:14.016 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:14.016 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.016 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.017 [2024-07-15 21:14:41.159094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.017 null0 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 389de56cbd1d42458e21dcd986df31b0 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.017 [2024-07-15 21:14:41.215344] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.017 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.277 nvme0n1 00:24:14.277 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.277 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:14.277 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.277 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.277 [ 00:24:14.277 { 00:24:14.277 "name": "nvme0n1", 00:24:14.277 "aliases": [ 00:24:14.277 "389de56c-bd1d-4245-8e21-dcd986df31b0" 00:24:14.277 ], 00:24:14.277 "product_name": "NVMe disk", 00:24:14.277 "block_size": 512, 00:24:14.277 "num_blocks": 2097152, 00:24:14.277 "uuid": "389de56c-bd1d-4245-8e21-dcd986df31b0", 00:24:14.277 "assigned_rate_limits": { 00:24:14.277 "rw_ios_per_sec": 0, 00:24:14.277 "rw_mbytes_per_sec": 0, 00:24:14.277 "r_mbytes_per_sec": 0, 00:24:14.277 "w_mbytes_per_sec": 0 00:24:14.277 }, 00:24:14.277 "claimed": false, 00:24:14.277 "zoned": false, 00:24:14.277 "supported_io_types": { 00:24:14.277 "read": true, 00:24:14.277 "write": true, 00:24:14.277 "unmap": false, 00:24:14.277 "flush": true, 00:24:14.277 "reset": true, 00:24:14.277 "nvme_admin": true, 00:24:14.277 "nvme_io": true, 00:24:14.277 "nvme_io_md": false, 00:24:14.277 "write_zeroes": true, 00:24:14.277 "zcopy": false, 00:24:14.277 "get_zone_info": false, 00:24:14.277 "zone_management": false, 00:24:14.277 "zone_append": false, 00:24:14.277 "compare": true, 00:24:14.277 "compare_and_write": true, 00:24:14.277 "abort": true, 00:24:14.277 "seek_hole": false, 00:24:14.277 "seek_data": false, 00:24:14.277 "copy": true, 00:24:14.277 "nvme_iov_md": false 00:24:14.277 }, 00:24:14.277 "memory_domains": [ 00:24:14.277 { 00:24:14.277 "dma_device_id": "system", 00:24:14.277 "dma_device_type": 1 00:24:14.277 } 00:24:14.277 ], 00:24:14.277 "driver_specific": { 00:24:14.277 "nvme": [ 00:24:14.277 { 00:24:14.277 "trid": { 00:24:14.277 "trtype": "TCP", 00:24:14.277 "adrfam": "IPv4", 00:24:14.277 "traddr": "10.0.0.2", 00:24:14.277 "trsvcid": "4420", 00:24:14.277 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:14.277 }, 00:24:14.277 "ctrlr_data": { 00:24:14.277 "cntlid": 1, 00:24:14.277 "vendor_id": "0x8086", 00:24:14.277 "model_number": "SPDK bdev Controller", 00:24:14.277 "serial_number": "00000000000000000000", 00:24:14.277 "firmware_revision": "24.09", 00:24:14.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.277 "oacs": { 00:24:14.277 "security": 0, 00:24:14.277 "format": 0, 00:24:14.277 "firmware": 0, 00:24:14.277 "ns_manage": 0 00:24:14.277 }, 00:24:14.277 "multi_ctrlr": true, 00:24:14.277 "ana_reporting": false 00:24:14.277 }, 00:24:14.277 "vs": { 00:24:14.277 "nvme_version": "1.3" 00:24:14.277 }, 00:24:14.277 "ns_data": { 00:24:14.277 "id": 1, 00:24:14.277 "can_share": true 00:24:14.277 } 00:24:14.277 } 00:24:14.277 ], 00:24:14.278 "mp_policy": "active_passive" 00:24:14.278 } 00:24:14.278 } 00:24:14.278 ] 00:24:14.278 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.278 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
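Up to this point the async_init test has driven everything through rpc_cmd. Issued directly with scripts/rpc.py (which is essentially what rpc_cmd wraps — an assumption about the harness, since the wrapper itself is not shown in this excerpt), the target setup and the host-side attach amount to:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512          # 1024 MiB null bdev with 512-byte blocks
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 389de56cbd1d42458e21dcd986df31b0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: expose the subsystem as local bdev nvme0n1 over NVMe/TCP
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

The bdev_get_bdevs dump above reflects that attach: nvme0n1 reports 2097152 blocks of 512 bytes (the 1024 MiB null bdev) and a UUID equal to the NGUID passed to nvmf_subsystem_add_ns.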
00:24:14.278 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.278 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.278 [2024-07-15 21:14:41.484069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.278 [2024-07-15 21:14:41.484130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb3610 (9): Bad file descriptor 00:24:14.538 [2024-07-15 21:14:41.616329] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.539 [ 00:24:14.539 { 00:24:14.539 "name": "nvme0n1", 00:24:14.539 "aliases": [ 00:24:14.539 "389de56c-bd1d-4245-8e21-dcd986df31b0" 00:24:14.539 ], 00:24:14.539 "product_name": "NVMe disk", 00:24:14.539 "block_size": 512, 00:24:14.539 "num_blocks": 2097152, 00:24:14.539 "uuid": "389de56c-bd1d-4245-8e21-dcd986df31b0", 00:24:14.539 "assigned_rate_limits": { 00:24:14.539 "rw_ios_per_sec": 0, 00:24:14.539 "rw_mbytes_per_sec": 0, 00:24:14.539 "r_mbytes_per_sec": 0, 00:24:14.539 "w_mbytes_per_sec": 0 00:24:14.539 }, 00:24:14.539 "claimed": false, 00:24:14.539 "zoned": false, 00:24:14.539 "supported_io_types": { 00:24:14.539 "read": true, 00:24:14.539 "write": true, 00:24:14.539 "unmap": false, 00:24:14.539 "flush": true, 00:24:14.539 "reset": true, 00:24:14.539 "nvme_admin": true, 00:24:14.539 "nvme_io": true, 00:24:14.539 "nvme_io_md": false, 00:24:14.539 "write_zeroes": true, 00:24:14.539 "zcopy": false, 00:24:14.539 "get_zone_info": false, 00:24:14.539 "zone_management": false, 00:24:14.539 "zone_append": false, 00:24:14.539 "compare": true, 00:24:14.539 "compare_and_write": true, 00:24:14.539 "abort": true, 00:24:14.539 "seek_hole": false, 00:24:14.539 "seek_data": false, 00:24:14.539 "copy": true, 00:24:14.539 "nvme_iov_md": false 00:24:14.539 }, 00:24:14.539 "memory_domains": [ 00:24:14.539 { 00:24:14.539 "dma_device_id": "system", 00:24:14.539 "dma_device_type": 1 00:24:14.539 } 00:24:14.539 ], 00:24:14.539 "driver_specific": { 00:24:14.539 "nvme": [ 00:24:14.539 { 00:24:14.539 "trid": { 00:24:14.539 "trtype": "TCP", 00:24:14.539 "adrfam": "IPv4", 00:24:14.539 "traddr": "10.0.0.2", 00:24:14.539 "trsvcid": "4420", 00:24:14.539 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:14.539 }, 00:24:14.539 "ctrlr_data": { 00:24:14.539 "cntlid": 2, 00:24:14.539 "vendor_id": "0x8086", 00:24:14.539 "model_number": "SPDK bdev Controller", 00:24:14.539 "serial_number": "00000000000000000000", 00:24:14.539 "firmware_revision": "24.09", 00:24:14.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.539 "oacs": { 00:24:14.539 "security": 0, 00:24:14.539 "format": 0, 00:24:14.539 "firmware": 0, 00:24:14.539 "ns_manage": 0 00:24:14.539 }, 00:24:14.539 "multi_ctrlr": true, 00:24:14.539 "ana_reporting": false 00:24:14.539 }, 00:24:14.539 "vs": { 00:24:14.539 "nvme_version": "1.3" 00:24:14.539 }, 00:24:14.539 "ns_data": { 00:24:14.539 "id": 1, 00:24:14.539 "can_share": true 00:24:14.539 } 00:24:14.539 } 00:24:14.539 ], 00:24:14.539 "mp_policy": "active_passive" 00:24:14.539 } 00:24:14.539 } 
00:24:14.539 ] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.lmfX1OxJZp 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.lmfX1OxJZp 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.539 [2024-07-15 21:14:41.680684] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.539 [2024-07-15 21:14:41.680792] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lmfX1OxJZp 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.539 [2024-07-15 21:14:41.692709] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lmfX1OxJZp 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.539 [2024-07-15 21:14:41.704760] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.539 [2024-07-15 21:14:41.704797] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
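The remainder of the test exercises the experimental TLS path on a second listener. Condensed, and using the same scripts/rpc.py convention as the sketch above, the flow the log records is roughly:

    key=$(mktemp)                                   # /tmp/tmp.lmfX1OxJZp in this particular run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
    chmod 0600 "$key"
    # the PSK is tied to an explicit host entry, so allow-any-host is switched off first
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
    # reconnect on the TLS port 4421 with the matching host NQN and PSK
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"

As the warnings above note, both the PSK-path form of the listener/host setup and spdk_nvme_ctrlr_opts.psk are deprecated features scheduled for removal in v24.09, so this exact invocation is tied to the SPDK revision under test.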
00:24:14.539 nvme0n1 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.539 [ 00:24:14.539 { 00:24:14.539 "name": "nvme0n1", 00:24:14.539 "aliases": [ 00:24:14.539 "389de56c-bd1d-4245-8e21-dcd986df31b0" 00:24:14.539 ], 00:24:14.539 "product_name": "NVMe disk", 00:24:14.539 "block_size": 512, 00:24:14.539 "num_blocks": 2097152, 00:24:14.539 "uuid": "389de56c-bd1d-4245-8e21-dcd986df31b0", 00:24:14.539 "assigned_rate_limits": { 00:24:14.539 "rw_ios_per_sec": 0, 00:24:14.539 "rw_mbytes_per_sec": 0, 00:24:14.539 "r_mbytes_per_sec": 0, 00:24:14.539 "w_mbytes_per_sec": 0 00:24:14.539 }, 00:24:14.539 "claimed": false, 00:24:14.539 "zoned": false, 00:24:14.539 "supported_io_types": { 00:24:14.539 "read": true, 00:24:14.539 "write": true, 00:24:14.539 "unmap": false, 00:24:14.539 "flush": true, 00:24:14.539 "reset": true, 00:24:14.539 "nvme_admin": true, 00:24:14.539 "nvme_io": true, 00:24:14.539 "nvme_io_md": false, 00:24:14.539 "write_zeroes": true, 00:24:14.539 "zcopy": false, 00:24:14.539 "get_zone_info": false, 00:24:14.539 "zone_management": false, 00:24:14.539 "zone_append": false, 00:24:14.539 "compare": true, 00:24:14.539 "compare_and_write": true, 00:24:14.539 "abort": true, 00:24:14.539 "seek_hole": false, 00:24:14.539 "seek_data": false, 00:24:14.539 "copy": true, 00:24:14.539 "nvme_iov_md": false 00:24:14.539 }, 00:24:14.539 "memory_domains": [ 00:24:14.539 { 00:24:14.539 "dma_device_id": "system", 00:24:14.539 "dma_device_type": 1 00:24:14.539 } 00:24:14.539 ], 00:24:14.539 "driver_specific": { 00:24:14.539 "nvme": [ 00:24:14.539 { 00:24:14.539 "trid": { 00:24:14.539 "trtype": "TCP", 00:24:14.539 "adrfam": "IPv4", 00:24:14.539 "traddr": "10.0.0.2", 00:24:14.539 "trsvcid": "4421", 00:24:14.539 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:14.539 }, 00:24:14.539 "ctrlr_data": { 00:24:14.539 "cntlid": 3, 00:24:14.539 "vendor_id": "0x8086", 00:24:14.539 "model_number": "SPDK bdev Controller", 00:24:14.539 "serial_number": "00000000000000000000", 00:24:14.539 "firmware_revision": "24.09", 00:24:14.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.539 "oacs": { 00:24:14.539 "security": 0, 00:24:14.539 "format": 0, 00:24:14.539 "firmware": 0, 00:24:14.539 "ns_manage": 0 00:24:14.539 }, 00:24:14.539 "multi_ctrlr": true, 00:24:14.539 "ana_reporting": false 00:24:14.539 }, 00:24:14.539 "vs": { 00:24:14.539 "nvme_version": "1.3" 00:24:14.539 }, 00:24:14.539 "ns_data": { 00:24:14.539 "id": 1, 00:24:14.539 "can_share": true 00:24:14.539 } 00:24:14.539 } 00:24:14.539 ], 00:24:14.539 "mp_policy": "active_passive" 00:24:14.539 } 00:24:14.539 } 00:24:14.539 ] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.lmfX1OxJZp 00:24:14.539 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:14.540 21:14:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:14.540 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.540 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:14.540 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.540 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:14.540 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.540 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.540 rmmod nvme_tcp 00:24:14.801 rmmod nvme_fabrics 00:24:14.801 rmmod nvme_keyring 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2060050 ']' 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2060050 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2060050 ']' 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2060050 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2060050 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2060050' 00:24:14.801 killing process with pid 2060050 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2060050 00:24:14.801 [2024-07-15 21:14:41.934690] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:14.801 [2024-07-15 21:14:41.934716] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:14.801 21:14:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2060050 00:24:14.801 21:14:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.801 21:14:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.801 21:14:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.801 21:14:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.801 21:14:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.801 21:14:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.801 21:14:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.801 21:14:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:17.346 21:14:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:17.346 00:24:17.346 real 0m12.223s 00:24:17.346 user 0m4.266s 00:24:17.346 sys 0m6.381s 00:24:17.346 21:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:17.346 21:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.346 ************************************ 00:24:17.346 END TEST nvmf_async_init 00:24:17.346 ************************************ 00:24:17.346 21:14:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:17.346 21:14:44 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:17.346 21:14:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:17.346 21:14:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:17.346 21:14:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.346 ************************************ 00:24:17.346 START TEST dma 00:24:17.346 ************************************ 00:24:17.346 21:14:44 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:17.346 * Looking for test storage... 00:24:17.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:17.346 21:14:44 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.346 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.346 21:14:44 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.346 21:14:44 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.346 21:14:44 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.347 21:14:44 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.347 21:14:44 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.347 21:14:44 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.347 21:14:44 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:17.347 21:14:44 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.347 21:14:44 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.347 21:14:44 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:17.347 21:14:44 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:17.347 00:24:17.347 real 0m0.136s 00:24:17.347 user 0m0.071s 00:24:17.347 sys 0m0.073s 00:24:17.347 21:14:44 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:17.347 21:14:44 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:17.347 ************************************ 00:24:17.347 END TEST dma 00:24:17.347 ************************************ 00:24:17.347 21:14:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:17.347 21:14:44 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:17.347 21:14:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:17.347 21:14:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:17.347 21:14:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.347 ************************************ 00:24:17.347 START TEST nvmf_identify 00:24:17.347 ************************************ 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:17.347 * Looking for test storage... 00:24:17.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:17.347 21:14:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:25.486 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:25.486 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:25.486 Found net devices under 0000:31:00.0: cvl_0_0 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:25.486 Found net devices under 0000:31:00.1: cvl_0_1 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:25.486 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:25.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:25.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:24:25.747 00:24:25.747 --- 10.0.0.2 ping statistics --- 00:24:25.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.747 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.467 ms 00:24:25.747 00:24:25.747 --- 10.0.0.1 ping statistics --- 00:24:25.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.747 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2065168 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2065168 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2065168 ']' 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.747 21:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:25.747 [2024-07-15 21:14:52.916850] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
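The phy-mode TCP setup above (nvmf_tcp_init in nvmf/common.sh) moves one ice port into a network namespace to act as the target and leaves its peer port in the default namespace as the initiator, so both ends can live on a single host. The sketch below restates those commands in plain form, using the interface names and addresses shown in the log.

  # Clear any stale addresses, then split the two cvl ports across namespaces.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-facing port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side (default netns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check reachability in both directions, as the log does above.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # The target application itself is then launched inside the namespace.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF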
00:24:25.747 [2024-07-15 21:14:52.916917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.747 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.747 [2024-07-15 21:14:52.997853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:26.008 [2024-07-15 21:14:53.074143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.008 [2024-07-15 21:14:53.074179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.008 [2024-07-15 21:14:53.074186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.008 [2024-07-15 21:14:53.074193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.008 [2024-07-15 21:14:53.074198] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.008 [2024-07-15 21:14:53.074278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.008 [2024-07-15 21:14:53.074345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.008 [2024-07-15 21:14:53.074853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.008 [2024-07-15 21:14:53.074854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.582 [2024-07-15 21:14:53.698747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.582 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.582 Malloc0 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.583 [2024-07-15 21:14:53.782212] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.583 [ 00:24:26.583 { 00:24:26.583 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:26.583 "subtype": "Discovery", 00:24:26.583 "listen_addresses": [ 00:24:26.583 { 00:24:26.583 "trtype": "TCP", 00:24:26.583 "adrfam": "IPv4", 00:24:26.583 "traddr": "10.0.0.2", 00:24:26.583 "trsvcid": "4420" 00:24:26.583 } 00:24:26.583 ], 00:24:26.583 "allow_any_host": true, 00:24:26.583 "hosts": [] 00:24:26.583 }, 00:24:26.583 { 00:24:26.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.583 "subtype": "NVMe", 00:24:26.583 "listen_addresses": [ 00:24:26.583 { 00:24:26.583 "trtype": "TCP", 00:24:26.583 "adrfam": "IPv4", 00:24:26.583 "traddr": "10.0.0.2", 00:24:26.583 "trsvcid": "4420" 00:24:26.583 } 00:24:26.583 ], 00:24:26.583 "allow_any_host": true, 00:24:26.583 "hosts": [], 00:24:26.583 "serial_number": "SPDK00000000000001", 00:24:26.583 "model_number": "SPDK bdev Controller", 00:24:26.583 "max_namespaces": 32, 00:24:26.583 "min_cntlid": 1, 00:24:26.583 "max_cntlid": 65519, 00:24:26.583 "namespaces": [ 00:24:26.583 { 00:24:26.583 "nsid": 1, 00:24:26.583 "bdev_name": "Malloc0", 00:24:26.583 "name": "Malloc0", 00:24:26.583 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:26.583 "eui64": "ABCDEF0123456789", 00:24:26.583 "uuid": "fb08e134-9ff8-433c-99a2-82333bcdfabb" 00:24:26.583 } 00:24:26.583 ] 00:24:26.583 } 00:24:26.583 ] 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.583 21:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:26.583 [2024-07-15 21:14:53.837036] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
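Before spdk_nvme_identify runs, host/identify.sh configures the target through a handful of RPCs. The equivalent standalone sequence is sketched below, again substituting scripts/rpc.py for the harness's rpc_cmd wrapper, with sizes, NQNs and identifiers copied from the log; the identify invocation that produces the verbose trace which follows is included at the end.

  # Target-side setup against the nvmf_tgt started above.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Host side: query the discovery subsystem with all transport logging enabled,
  # which yields the FABRIC CONNECT / PROPERTY GET / IDENTIFY trace below.
  build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all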
00:24:26.583 [2024-07-15 21:14:53.837105] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065383 ] 00:24:26.583 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.583 [2024-07-15 21:14:53.870928] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:26.583 [2024-07-15 21:14:53.870983] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:26.583 [2024-07-15 21:14:53.870988] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:26.583 [2024-07-15 21:14:53.870999] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:26.583 [2024-07-15 21:14:53.871006] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:26.849 [2024-07-15 21:14:53.874261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:26.849 [2024-07-15 21:14:53.874291] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9eeec0 0 00:24:26.849 [2024-07-15 21:14:53.882239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:26.849 [2024-07-15 21:14:53.882250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:26.849 [2024-07-15 21:14:53.882254] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:26.849 [2024-07-15 21:14:53.882257] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:26.849 [2024-07-15 21:14:53.882293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.882300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.882304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.849 [2024-07-15 21:14:53.882317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:26.849 [2024-07-15 21:14:53.882332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.849 [2024-07-15 21:14:53.890242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.849 [2024-07-15 21:14:53.890250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.849 [2024-07-15 21:14:53.890254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.849 [2024-07-15 21:14:53.890276] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:26.849 [2024-07-15 21:14:53.890283] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:26.849 [2024-07-15 21:14:53.890289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:26.849 [2024-07-15 21:14:53.890301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890305] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.849 [2024-07-15 21:14:53.890316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.849 [2024-07-15 21:14:53.890329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.849 [2024-07-15 21:14:53.890546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.849 [2024-07-15 21:14:53.890553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.849 [2024-07-15 21:14:53.890556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.849 [2024-07-15 21:14:53.890565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:26.849 [2024-07-15 21:14:53.890572] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:26.849 [2024-07-15 21:14:53.890579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.849 [2024-07-15 21:14:53.890593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.849 [2024-07-15 21:14:53.890603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.849 [2024-07-15 21:14:53.890810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.849 [2024-07-15 21:14:53.890816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.849 [2024-07-15 21:14:53.890819] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.849 [2024-07-15 21:14:53.890828] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:26.849 [2024-07-15 21:14:53.890836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:26.849 [2024-07-15 21:14:53.890842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.890849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.849 [2024-07-15 21:14:53.890856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.849 [2024-07-15 21:14:53.890866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.849 [2024-07-15 21:14:53.891070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.849 
[2024-07-15 21:14:53.891077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.849 [2024-07-15 21:14:53.891080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.849 [2024-07-15 21:14:53.891091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:26.849 [2024-07-15 21:14:53.891100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.849 [2024-07-15 21:14:53.891114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.849 [2024-07-15 21:14:53.891124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.849 [2024-07-15 21:14:53.891329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.849 [2024-07-15 21:14:53.891336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.849 [2024-07-15 21:14:53.891340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.849 [2024-07-15 21:14:53.891348] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:26.849 [2024-07-15 21:14:53.891353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:26.849 [2024-07-15 21:14:53.891360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:26.849 [2024-07-15 21:14:53.891466] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:26.849 [2024-07-15 21:14:53.891470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:26.849 [2024-07-15 21:14:53.891479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.849 [2024-07-15 21:14:53.891493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.849 [2024-07-15 21:14:53.891503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.849 [2024-07-15 21:14:53.891707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.849 [2024-07-15 21:14:53.891713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.849 [2024-07-15 21:14:53.891717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:26.849 [2024-07-15 21:14:53.891720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.849 [2024-07-15 21:14:53.891725] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:26.849 [2024-07-15 21:14:53.891734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.849 [2024-07-15 21:14:53.891747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.849 [2024-07-15 21:14:53.891757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.849 [2024-07-15 21:14:53.891936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.849 [2024-07-15 21:14:53.891942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.849 [2024-07-15 21:14:53.891947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.849 [2024-07-15 21:14:53.891956] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:26.849 [2024-07-15 21:14:53.891960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:26.849 [2024-07-15 21:14:53.891967] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:26.849 [2024-07-15 21:14:53.891983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:26.849 [2024-07-15 21:14:53.891992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.849 [2024-07-15 21:14:53.891995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.849 [2024-07-15 21:14:53.892002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.849 [2024-07-15 21:14:53.892012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.850 [2024-07-15 21:14:53.892217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:26.850 [2024-07-15 21:14:53.892224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:26.850 [2024-07-15 21:14:53.892227] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892240] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eeec0): datao=0, datal=4096, cccid=0 00:24:26.850 [2024-07-15 21:14:53.892245] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa71fc0) on tqpair(0x9eeec0): expected_datao=0, payload_size=4096 00:24:26.850 [2024-07-15 21:14:53.892250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:26.850 [2024-07-15 21:14:53.892257] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892261] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.850 [2024-07-15 21:14:53.892406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.850 [2024-07-15 21:14:53.892410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.850 [2024-07-15 21:14:53.892421] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:26.850 [2024-07-15 21:14:53.892428] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:26.850 [2024-07-15 21:14:53.892433] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:26.850 [2024-07-15 21:14:53.892438] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:26.850 [2024-07-15 21:14:53.892442] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:26.850 [2024-07-15 21:14:53.892447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:26.850 [2024-07-15 21:14:53.892455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:26.850 [2024-07-15 21:14:53.892462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.892478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.850 [2024-07-15 21:14:53.892489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.850 [2024-07-15 21:14:53.892684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.850 [2024-07-15 21:14:53.892691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.850 [2024-07-15 21:14:53.892694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.850 [2024-07-15 21:14:53.892706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.892719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.850 [2024-07-15 21:14:53.892725] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.892738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.850 [2024-07-15 21:14:53.892744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.892756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.850 [2024-07-15 21:14:53.892762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.892775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.850 [2024-07-15 21:14:53.892779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:26.850 [2024-07-15 21:14:53.892790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:26.850 [2024-07-15 21:14:53.892796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.892799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.892806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.850 [2024-07-15 21:14:53.892817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa71fc0, cid 0, qid 0 00:24:26.850 [2024-07-15 21:14:53.892822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72140, cid 1, qid 0 00:24:26.850 [2024-07-15 21:14:53.892827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa722c0, cid 2, qid 0 00:24:26.850 [2024-07-15 21:14:53.892832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.850 [2024-07-15 21:14:53.892836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa725c0, cid 4, qid 0 00:24:26.850 [2024-07-15 21:14:53.893076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.850 [2024-07-15 21:14:53.893083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.850 [2024-07-15 21:14:53.893088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa725c0) on tqpair=0x9eeec0 00:24:26.850 [2024-07-15 21:14:53.893097] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:26.850 [2024-07-15 21:14:53.893102] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:26.850 [2024-07-15 21:14:53.893112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.893122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.850 [2024-07-15 21:14:53.893131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa725c0, cid 4, qid 0 00:24:26.850 [2024-07-15 21:14:53.893305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:26.850 [2024-07-15 21:14:53.893311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:26.850 [2024-07-15 21:14:53.893315] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893318] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eeec0): datao=0, datal=4096, cccid=4 00:24:26.850 [2024-07-15 21:14:53.893323] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa725c0) on tqpair(0x9eeec0): expected_datao=0, payload_size=4096 00:24:26.850 [2024-07-15 21:14:53.893327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893354] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893358] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.850 [2024-07-15 21:14:53.893502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.850 [2024-07-15 21:14:53.893506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa725c0) on tqpair=0x9eeec0 00:24:26.850 [2024-07-15 21:14:53.893521] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:26.850 [2024-07-15 21:14:53.893541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.893552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.850 [2024-07-15 21:14:53.893558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.893571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.850 [2024-07-15 21:14:53.893584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xa725c0, cid 4, qid 0 00:24:26.850 [2024-07-15 21:14:53.893589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72740, cid 5, qid 0 00:24:26.850 [2024-07-15 21:14:53.893848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:26.850 [2024-07-15 21:14:53.893854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:26.850 [2024-07-15 21:14:53.893858] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893861] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eeec0): datao=0, datal=1024, cccid=4 00:24:26.850 [2024-07-15 21:14:53.893868] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa725c0) on tqpair(0x9eeec0): expected_datao=0, payload_size=1024 00:24:26.850 [2024-07-15 21:14:53.893872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893878] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893882] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.850 [2024-07-15 21:14:53.893893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.850 [2024-07-15 21:14:53.893896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.893900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72740) on tqpair=0x9eeec0 00:24:26.850 [2024-07-15 21:14:53.937238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.850 [2024-07-15 21:14:53.937247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.850 [2024-07-15 21:14:53.937251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.937255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa725c0) on tqpair=0x9eeec0 00:24:26.850 [2024-07-15 21:14:53.937269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.850 [2024-07-15 21:14:53.937273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eeec0) 00:24:26.850 [2024-07-15 21:14:53.937280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.850 [2024-07-15 21:14:53.937295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa725c0, cid 4, qid 0 00:24:26.850 [2024-07-15 21:14:53.937495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:26.850 [2024-07-15 21:14:53.937502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:26.850 [2024-07-15 21:14:53.937505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.937509] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eeec0): datao=0, datal=3072, cccid=4 00:24:26.851 [2024-07-15 21:14:53.937513] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa725c0) on tqpair(0x9eeec0): expected_datao=0, payload_size=3072 00:24:26.851 [2024-07-15 21:14:53.937517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.937545] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.937549] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.978410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.851 [2024-07-15 21:14:53.978420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.851 [2024-07-15 21:14:53.978423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.978427] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa725c0) on tqpair=0x9eeec0 00:24:26.851 [2024-07-15 21:14:53.978436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.978440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eeec0) 00:24:26.851 [2024-07-15 21:14:53.978447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.851 [2024-07-15 21:14:53.978461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa725c0, cid 4, qid 0 00:24:26.851 [2024-07-15 21:14:53.978646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:26.851 [2024-07-15 21:14:53.978652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:26.851 [2024-07-15 21:14:53.978656] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.978659] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eeec0): datao=0, datal=8, cccid=4 00:24:26.851 [2024-07-15 21:14:53.978664] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa725c0) on tqpair(0x9eeec0): expected_datao=0, payload_size=8 00:24:26.851 [2024-07-15 21:14:53.978673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.978680] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:53.978684] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:54.019426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.851 [2024-07-15 21:14:54.019435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.851 [2024-07-15 21:14:54.019439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.851 [2024-07-15 21:14:54.019443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa725c0) on tqpair=0x9eeec0 00:24:26.851 ===================================================== 00:24:26.851 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:26.851 ===================================================== 00:24:26.851 Controller Capabilities/Features 00:24:26.851 ================================ 00:24:26.851 Vendor ID: 0000 00:24:26.851 Subsystem Vendor ID: 0000 00:24:26.851 Serial Number: .................... 00:24:26.851 Model Number: ........................................ 
00:24:26.851 Firmware Version: 24.09
00:24:26.851 Recommended Arb Burst: 0
00:24:26.851 IEEE OUI Identifier: 00 00 00
00:24:26.851 Multi-path I/O
00:24:26.851 May have multiple subsystem ports: No
00:24:26.851 May have multiple controllers: No
00:24:26.851 Associated with SR-IOV VF: No
00:24:26.851 Max Data Transfer Size: 131072
00:24:26.851 Max Number of Namespaces: 0
00:24:26.851 Max Number of I/O Queues: 1024
00:24:26.851 NVMe Specification Version (VS): 1.3
00:24:26.851 NVMe Specification Version (Identify): 1.3
00:24:26.851 Maximum Queue Entries: 128
00:24:26.851 Contiguous Queues Required: Yes
00:24:26.851 Arbitration Mechanisms Supported
00:24:26.851 Weighted Round Robin: Not Supported
00:24:26.851 Vendor Specific: Not Supported
00:24:26.851 Reset Timeout: 15000 ms
00:24:26.851 Doorbell Stride: 4 bytes
00:24:26.851 NVM Subsystem Reset: Not Supported
00:24:26.851 Command Sets Supported
00:24:26.851 NVM Command Set: Supported
00:24:26.851 Boot Partition: Not Supported
00:24:26.851 Memory Page Size Minimum: 4096 bytes
00:24:26.851 Memory Page Size Maximum: 4096 bytes
00:24:26.851 Persistent Memory Region: Not Supported
00:24:26.851 Optional Asynchronous Events Supported
00:24:26.851 Namespace Attribute Notices: Not Supported
00:24:26.851 Firmware Activation Notices: Not Supported
00:24:26.851 ANA Change Notices: Not Supported
00:24:26.851 PLE Aggregate Log Change Notices: Not Supported
00:24:26.851 LBA Status Info Alert Notices: Not Supported
00:24:26.851 EGE Aggregate Log Change Notices: Not Supported
00:24:26.851 Normal NVM Subsystem Shutdown event: Not Supported
00:24:26.851 Zone Descriptor Change Notices: Not Supported
00:24:26.851 Discovery Log Change Notices: Supported
00:24:26.851 Controller Attributes
00:24:26.851 128-bit Host Identifier: Not Supported
00:24:26.851 Non-Operational Permissive Mode: Not Supported
00:24:26.851 NVM Sets: Not Supported
00:24:26.851 Read Recovery Levels: Not Supported
00:24:26.851 Endurance Groups: Not Supported
00:24:26.851 Predictable Latency Mode: Not Supported
00:24:26.851 Traffic Based Keep ALive: Not Supported
00:24:26.851 Namespace Granularity: Not Supported
00:24:26.851 SQ Associations: Not Supported
00:24:26.851 UUID List: Not Supported
00:24:26.851 Multi-Domain Subsystem: Not Supported
00:24:26.851 Fixed Capacity Management: Not Supported
00:24:26.851 Variable Capacity Management: Not Supported
00:24:26.851 Delete Endurance Group: Not Supported
00:24:26.851 Delete NVM Set: Not Supported
00:24:26.851 Extended LBA Formats Supported: Not Supported
00:24:26.851 Flexible Data Placement Supported: Not Supported
00:24:26.851
00:24:26.851 Controller Memory Buffer Support
00:24:26.851 ================================
00:24:26.851 Supported: No
00:24:26.851
00:24:26.851 Persistent Memory Region Support
00:24:26.851 ================================
00:24:26.851 Supported: No
00:24:26.851
00:24:26.851 Admin Command Set Attributes
00:24:26.851 ============================
00:24:26.851 Security Send/Receive: Not Supported
00:24:26.851 Format NVM: Not Supported
00:24:26.851 Firmware Activate/Download: Not Supported
00:24:26.851 Namespace Management: Not Supported
00:24:26.851 Device Self-Test: Not Supported
00:24:26.851 Directives: Not Supported
00:24:26.851 NVMe-MI: Not Supported
00:24:26.851 Virtualization Management: Not Supported
00:24:26.851 Doorbell Buffer Config: Not Supported
00:24:26.851 Get LBA Status Capability: Not Supported
00:24:26.851 Command & Feature Lockdown Capability: Not Supported
00:24:26.851 Abort Command Limit: 1
00:24:26.851 Async Event Request Limit: 4
00:24:26.851 Number of Firmware Slots: N/A
00:24:26.851 Firmware Slot 1 Read-Only: N/A
00:24:26.851 Firmware Activation Without Reset: N/A
00:24:26.851 Multiple Update Detection Support: N/A
00:24:26.851 Firmware Update Granularity: No Information Provided
00:24:26.851 Per-Namespace SMART Log: No
00:24:26.851 Asymmetric Namespace Access Log Page: Not Supported
00:24:26.851 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:26.851 Command Effects Log Page: Not Supported
00:24:26.851 Get Log Page Extended Data: Supported
00:24:26.851 Telemetry Log Pages: Not Supported
00:24:26.851 Persistent Event Log Pages: Not Supported
00:24:26.851 Supported Log Pages Log Page: May Support
00:24:26.851 Commands Supported & Effects Log Page: Not Supported
00:24:26.851 Feature Identifiers & Effects Log Page:May Support
00:24:26.851 NVMe-MI Commands & Effects Log Page: May Support
00:24:26.851 Data Area 4 for Telemetry Log: Not Supported
00:24:26.851 Error Log Page Entries Supported: 128
00:24:26.851 Keep Alive: Not Supported
00:24:26.851
00:24:26.851 NVM Command Set Attributes
00:24:26.851 ==========================
00:24:26.851 Submission Queue Entry Size
00:24:26.851 Max: 1
00:24:26.851 Min: 1
00:24:26.851 Completion Queue Entry Size
00:24:26.851 Max: 1
00:24:26.851 Min: 1
00:24:26.851 Number of Namespaces: 0
00:24:26.851 Compare Command: Not Supported
00:24:26.851 Write Uncorrectable Command: Not Supported
00:24:26.851 Dataset Management Command: Not Supported
00:24:26.851 Write Zeroes Command: Not Supported
00:24:26.851 Set Features Save Field: Not Supported
00:24:26.851 Reservations: Not Supported
00:24:26.851 Timestamp: Not Supported
00:24:26.851 Copy: Not Supported
00:24:26.851 Volatile Write Cache: Not Present
00:24:26.851 Atomic Write Unit (Normal): 1
00:24:26.851 Atomic Write Unit (PFail): 1
00:24:26.851 Atomic Compare & Write Unit: 1
00:24:26.851 Fused Compare & Write: Supported
00:24:26.851 Scatter-Gather List
00:24:26.851 SGL Command Set: Supported
00:24:26.851 SGL Keyed: Supported
00:24:26.851 SGL Bit Bucket Descriptor: Not Supported
00:24:26.851 SGL Metadata Pointer: Not Supported
00:24:26.851 Oversized SGL: Not Supported
00:24:26.851 SGL Metadata Address: Not Supported
00:24:26.851 SGL Offset: Supported
00:24:26.851 Transport SGL Data Block: Not Supported
00:24:26.851 Replay Protected Memory Block: Not Supported
00:24:26.851
00:24:26.851 Firmware Slot Information
00:24:26.851 =========================
00:24:26.851 Active slot: 0
00:24:26.851
00:24:26.851
00:24:26.851 Error Log
00:24:26.851 =========
00:24:26.851
00:24:26.851 Active Namespaces
00:24:26.851 =================
00:24:26.851 Discovery Log Page
00:24:26.851 ==================
00:24:26.851 Generation Counter: 2
00:24:26.851 Number of Records: 2
00:24:26.851 Record Format: 0
00:24:26.851
00:24:26.851 Discovery Log Entry 0
00:24:26.851 ----------------------
00:24:26.851 Transport Type: 3 (TCP)
00:24:26.851 Address Family: 1 (IPv4)
00:24:26.851 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:26.851 Entry Flags:
00:24:26.851 Duplicate Returned Information: 1
00:24:26.851 Explicit Persistent Connection Support for Discovery: 1
00:24:26.851 Transport Requirements:
00:24:26.851 Secure Channel: Not Required
00:24:26.852 Port ID: 0 (0x0000)
00:24:26.852 Controller ID: 65535 (0xffff)
00:24:26.852 Admin Max SQ Size: 128
00:24:26.852 Transport Service Identifier: 4420
00:24:26.852 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:26.852 Transport Address: 10.0.0.2
00:24:26.852
Discovery Log Entry 1 00:24:26.852 ---------------------- 00:24:26.852 Transport Type: 3 (TCP) 00:24:26.852 Address Family: 1 (IPv4) 00:24:26.852 Subsystem Type: 2 (NVM Subsystem) 00:24:26.852 Entry Flags: 00:24:26.852 Duplicate Returned Information: 0 00:24:26.852 Explicit Persistent Connection Support for Discovery: 0 00:24:26.852 Transport Requirements: 00:24:26.852 Secure Channel: Not Required 00:24:26.852 Port ID: 0 (0x0000) 00:24:26.852 Controller ID: 65535 (0xffff) 00:24:26.852 Admin Max SQ Size: 128 00:24:26.852 Transport Service Identifier: 4420 00:24:26.852 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:26.852 Transport Address: 10.0.0.2 [2024-07-15 21:14:54.019528] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:26.852 [2024-07-15 21:14:54.019539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa71fc0) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.019545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.852 [2024-07-15 21:14:54.019550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72140) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.019555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.852 [2024-07-15 21:14:54.019560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa722c0) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.019564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.852 [2024-07-15 21:14:54.019569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.019574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.852 [2024-07-15 21:14:54.019584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.019588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.019591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.852 [2024-07-15 21:14:54.019598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.852 [2024-07-15 21:14:54.019611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.852 [2024-07-15 21:14:54.019727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.852 [2024-07-15 21:14:54.019733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.852 [2024-07-15 21:14:54.019737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.019741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.019748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.019752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.019755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.852 [2024-07-15 21:14:54.019762] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.852 [2024-07-15 21:14:54.019775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.852 [2024-07-15 21:14:54.019998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.852 [2024-07-15 21:14:54.020005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.852 [2024-07-15 21:14:54.020008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.020017] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:26.852 [2024-07-15 21:14:54.020023] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:26.852 [2024-07-15 21:14:54.020032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.852 [2024-07-15 21:14:54.020046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.852 [2024-07-15 21:14:54.020056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.852 [2024-07-15 21:14:54.020270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.852 [2024-07-15 21:14:54.020277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.852 [2024-07-15 21:14:54.020280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.020294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.852 [2024-07-15 21:14:54.020308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.852 [2024-07-15 21:14:54.020318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.852 [2024-07-15 21:14:54.020545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.852 [2024-07-15 21:14:54.020551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.852 [2024-07-15 21:14:54.020554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.020568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020575] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.852 [2024-07-15 21:14:54.020582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.852 [2024-07-15 21:14:54.020591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.852 [2024-07-15 21:14:54.020770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.852 [2024-07-15 21:14:54.020776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.852 [2024-07-15 21:14:54.020780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.020793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.020800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.852 [2024-07-15 21:14:54.020807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.852 [2024-07-15 21:14:54.020816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.852 [2024-07-15 21:14:54.021007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.852 [2024-07-15 21:14:54.021013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.852 [2024-07-15 21:14:54.021016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.021020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.021032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.021036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.021039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.852 [2024-07-15 21:14:54.021046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.852 [2024-07-15 21:14:54.021055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.852 [2024-07-15 21:14:54.021228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.852 [2024-07-15 21:14:54.025242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.852 [2024-07-15 21:14:54.025246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.025249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.025259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.025263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.025267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eeec0) 00:24:26.852 [2024-07-15 21:14:54.025273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.852 [2024-07-15 21:14:54.025285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa72440, cid 3, qid 0 00:24:26.852 [2024-07-15 21:14:54.025473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.852 [2024-07-15 21:14:54.025480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.852 [2024-07-15 21:14:54.025483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.852 [2024-07-15 21:14:54.025487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa72440) on tqpair=0x9eeec0 00:24:26.852 [2024-07-15 21:14:54.025495] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:26.852 00:24:26.852 21:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:26.852 [2024-07-15 21:14:54.064767] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:24:26.852 [2024-07-15 21:14:54.064809] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065391 ] 00:24:26.852 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.852 [2024-07-15 21:14:54.100805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:26.852 [2024-07-15 21:14:54.100851] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:26.852 [2024-07-15 21:14:54.100856] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:26.852 [2024-07-15 21:14:54.100868] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:26.852 [2024-07-15 21:14:54.100874] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:26.853 [2024-07-15 21:14:54.101315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:26.853 [2024-07-15 21:14:54.101340] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18c3ec0 0 00:24:26.853 [2024-07-15 21:14:54.108240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:26.853 [2024-07-15 21:14:54.108252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:26.853 [2024-07-15 21:14:54.108257] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:26.853 [2024-07-15 21:14:54.108260] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:26.853 [2024-07-15 21:14:54.108292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.108298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.108302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.853 [2024-07-15 21:14:54.108313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:26.853 [2024-07-15 21:14:54.108328] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.853 [2024-07-15 21:14:54.115239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.853 [2024-07-15 21:14:54.115247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.853 [2024-07-15 21:14:54.115251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:26.853 [2024-07-15 21:14:54.115267] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:26.853 [2024-07-15 21:14:54.115273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:26.853 [2024-07-15 21:14:54.115278] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:26.853 [2024-07-15 21:14:54.115290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.853 [2024-07-15 21:14:54.115306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.853 [2024-07-15 21:14:54.115318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.853 [2024-07-15 21:14:54.115541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.853 [2024-07-15 21:14:54.115548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.853 [2024-07-15 21:14:54.115551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:26.853 [2024-07-15 21:14:54.115560] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:26.853 [2024-07-15 21:14:54.115567] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:26.853 [2024-07-15 21:14:54.115573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.853 [2024-07-15 21:14:54.115587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.853 [2024-07-15 21:14:54.115597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.853 [2024-07-15 21:14:54.115789] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.853 [2024-07-15 21:14:54.115795] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.853 [2024-07-15 21:14:54.115799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 
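The trace at this point is the start of the second identify pass: spdk_nvme_identify parses the transport ID passed with -r (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), opens the admin queue pair, and drives the controller through FABRIC CONNECT, read vs/cap, CC.EN = 1, and the identify/AER/keep-alive states logged below. A minimal host program doing the same connect with SPDK's public NVMe API could look roughly like the following sketch; this is an illustrative sketch, not the identify tool's source, and it assumes the long-standing spdk_env_opts_init()/spdk_env_init() and spdk_nvme_connect() entry points plus a hugepage setup comparable to the EAL parameters shown above.

/*
 * Hedged sketch (not the spdk_nvme_identify source): connect to the same
 * subsystem this test targets and print a couple of identify fields.
 * Assumes SPDK's public headers spdk/env.h and spdk/nvme.h; error handling
 * is trimmed to the minimum.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	/* spdk_nvme_connect() performs the admin-queue bring-up traced here:
	 * FABRIC CONNECT, read VS/CAP, CC.EN = 1, wait for CSTS.RDY = 1,
	 * IDENTIFY, AER configuration, keep alive. */
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID: 0x%04x\n", cdata->cntlid);   /* 0x0001 in this run */
	printf("MDTS:   %u\n", cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}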
00:24:26.853 [2024-07-15 21:14:54.115808] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:26.853 [2024-07-15 21:14:54.115818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:26.853 [2024-07-15 21:14:54.115824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.115831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.853 [2024-07-15 21:14:54.115838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.853 [2024-07-15 21:14:54.115848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.853 [2024-07-15 21:14:54.116024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.853 [2024-07-15 21:14:54.116030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.853 [2024-07-15 21:14:54.116034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:26.853 [2024-07-15 21:14:54.116042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:26.853 [2024-07-15 21:14:54.116051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.853 [2024-07-15 21:14:54.116065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.853 [2024-07-15 21:14:54.116075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.853 [2024-07-15 21:14:54.116292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.853 [2024-07-15 21:14:54.116299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.853 [2024-07-15 21:14:54.116302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:26.853 [2024-07-15 21:14:54.116311] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:26.853 [2024-07-15 21:14:54.116316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:26.853 [2024-07-15 21:14:54.116323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:26.853 [2024-07-15 21:14:54.116428] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:26.853 [2024-07-15 21:14:54.116432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:26.853 [2024-07-15 21:14:54.116440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.853 [2024-07-15 21:14:54.116454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.853 [2024-07-15 21:14:54.116464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.853 [2024-07-15 21:14:54.116693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.853 [2024-07-15 21:14:54.116700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.853 [2024-07-15 21:14:54.116703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:26.853 [2024-07-15 21:14:54.116714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:26.853 [2024-07-15 21:14:54.116723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.116730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.853 [2024-07-15 21:14:54.116737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.853 [2024-07-15 21:14:54.116747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.853 [2024-07-15 21:14:54.116994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.853 [2024-07-15 21:14:54.117001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.853 [2024-07-15 21:14:54.117004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.853 [2024-07-15 21:14:54.117008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:26.853 [2024-07-15 21:14:54.117012] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:26.853 [2024-07-15 21:14:54.117017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:26.853 [2024-07-15 21:14:54.117024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:26.853 [2024-07-15 21:14:54.117031] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:26.853 [2024-07-15 21:14:54.117039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.117050] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.854 [2024-07-15 21:14:54.117060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.854 [2024-07-15 21:14:54.117275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:26.854 [2024-07-15 21:14:54.117282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:26.854 [2024-07-15 21:14:54.117285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117289] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3ec0): datao=0, datal=4096, cccid=0 00:24:26.854 [2024-07-15 21:14:54.117294] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1946fc0) on tqpair(0x18c3ec0): expected_datao=0, payload_size=4096 00:24:26.854 [2024-07-15 21:14:54.117298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117328] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117332] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.854 [2024-07-15 21:14:54.117555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.854 [2024-07-15 21:14:54.117558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:26.854 [2024-07-15 21:14:54.117569] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:26.854 [2024-07-15 21:14:54.117576] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:26.854 [2024-07-15 21:14:54.117580] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:26.854 [2024-07-15 21:14:54.117586] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:26.854 [2024-07-15 21:14:54.117591] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:26.854 [2024-07-15 21:14:54.117595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.117603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.117610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.117624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.854 [2024-07-15 21:14:54.117635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.854 [2024-07-15 21:14:54.117825] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.854 [2024-07-15 21:14:54.117832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.854 [2024-07-15 21:14:54.117835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:26.854 [2024-07-15 21:14:54.117845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.117858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.854 [2024-07-15 21:14:54.117864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.117877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.854 [2024-07-15 21:14:54.117883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.117896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.854 [2024-07-15 21:14:54.117902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.117914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.854 [2024-07-15 21:14:54.117919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.117929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.117935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.117938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.117947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.854 [2024-07-15 21:14:54.117958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946fc0, cid 0, qid 0 00:24:26.854 [2024-07-15 21:14:54.117963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1947140, cid 1, qid 0 00:24:26.854 [2024-07-15 21:14:54.117968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19472c0, cid 2, qid 0 00:24:26.854 [2024-07-15 21:14:54.117972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:26.854 [2024-07-15 21:14:54.117977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19475c0, cid 4, qid 0 00:24:26.854 [2024-07-15 21:14:54.118206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.854 [2024-07-15 21:14:54.118212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.854 [2024-07-15 21:14:54.118216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.118220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19475c0) on tqpair=0x18c3ec0 00:24:26.854 [2024-07-15 21:14:54.118224] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:26.854 [2024-07-15 21:14:54.118235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.118243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.118249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.118255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.118259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.118262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.118269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.854 [2024-07-15 21:14:54.118279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19475c0, cid 4, qid 0 00:24:26.854 [2024-07-15 21:14:54.118508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:26.854 [2024-07-15 21:14:54.118514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:26.854 [2024-07-15 21:14:54.118518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.118521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19475c0) on tqpair=0x18c3ec0 00:24:26.854 [2024-07-15 21:14:54.118585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.118594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:26.854 [2024-07-15 21:14:54.118602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.118605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3ec0) 00:24:26.854 [2024-07-15 21:14:54.118612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:26.854 [2024-07-15 21:14:54.118622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19475c0, cid 4, qid 0 00:24:26.854 [2024-07-15 21:14:54.118824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:26.854 [2024-07-15 21:14:54.118830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:26.854 [2024-07-15 21:14:54.118834] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.118840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3ec0): datao=0, datal=4096, cccid=4 00:24:26.854 [2024-07-15 21:14:54.118845] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19475c0) on tqpair(0x18c3ec0): expected_datao=0, payload_size=4096 00:24:26.854 [2024-07-15 21:14:54.118849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.118875] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:26.854 [2024-07-15 21:14:54.118879] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.161240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.118 [2024-07-15 21:14:54.161254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.118 [2024-07-15 21:14:54.161259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.161263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19475c0) on tqpair=0x18c3ec0 00:24:27.118 [2024-07-15 21:14:54.161275] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:27.118 [2024-07-15 21:14:54.161286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.161296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.161304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.161308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3ec0) 00:24:27.118 [2024-07-15 21:14:54.161316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.118 [2024-07-15 21:14:54.161330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19475c0, cid 4, qid 0 00:24:27.118 [2024-07-15 21:14:54.161528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.118 [2024-07-15 21:14:54.161535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.118 [2024-07-15 21:14:54.161538] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.161542] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3ec0): datao=0, datal=4096, cccid=4 00:24:27.118 [2024-07-15 21:14:54.161546] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19475c0) on tqpair(0x18c3ec0): expected_datao=0, payload_size=4096 00:24:27.118 [2024-07-15 21:14:54.161551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.161577] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.161581] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.202470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.118 [2024-07-15 21:14:54.202480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.118 [2024-07-15 21:14:54.202483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.202487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19475c0) on tqpair=0x18c3ec0 00:24:27.118 [2024-07-15 21:14:54.202501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.202510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.202518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.202521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3ec0) 00:24:27.118 [2024-07-15 21:14:54.202529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.118 [2024-07-15 21:14:54.202541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19475c0, cid 4, qid 0 00:24:27.118 [2024-07-15 21:14:54.202696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.118 [2024-07-15 21:14:54.202703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.118 [2024-07-15 21:14:54.202706] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.202710] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3ec0): datao=0, datal=4096, cccid=4 00:24:27.118 [2024-07-15 21:14:54.202714] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19475c0) on tqpair(0x18c3ec0): expected_datao=0, payload_size=4096 00:24:27.118 [2024-07-15 21:14:54.202719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.202744] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.202748] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.243425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.118 [2024-07-15 21:14:54.243434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.118 [2024-07-15 21:14:54.243437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.243441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19475c0) on tqpair=0x18c3ec0 00:24:27.118 [2024-07-15 21:14:54.243449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.243457] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.243466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.243472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.243478] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.243483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.243488] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:27.118 [2024-07-15 21:14:54.243492] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:27.118 [2024-07-15 21:14:54.243497] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:27.118 [2024-07-15 21:14:54.243511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.243515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3ec0) 00:24:27.118 [2024-07-15 21:14:54.243522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.118 [2024-07-15 21:14:54.243529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.243532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.243536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c3ec0) 00:24:27.118 [2024-07-15 21:14:54.243542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.118 [2024-07-15 21:14:54.243556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19475c0, cid 4, qid 0 00:24:27.118 [2024-07-15 21:14:54.243561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947740, cid 5, qid 0 00:24:27.118 [2024-07-15 21:14:54.243688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.118 [2024-07-15 21:14:54.243694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.118 [2024-07-15 21:14:54.243700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.118 [2024-07-15 21:14:54.243704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19475c0) on tqpair=0x18c3ec0 00:24:27.118 [2024-07-15 21:14:54.243711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.118 [2024-07-15 21:14:54.243717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.118 [2024-07-15 21:14:54.243720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.243724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947740) on tqpair=0x18c3ec0 00:24:27.119 [2024-07-15 21:14:54.243733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.243737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c3ec0) 00:24:27.119 [2024-07-15 21:14:54.243743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.119 
[2024-07-15 21:14:54.243753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947740, cid 5, qid 0 00:24:27.119 [2024-07-15 21:14:54.243952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.119 [2024-07-15 21:14:54.243958] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.119 [2024-07-15 21:14:54.243962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.243966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947740) on tqpair=0x18c3ec0 00:24:27.119 [2024-07-15 21:14:54.243974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.243978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c3ec0) 00:24:27.119 [2024-07-15 21:14:54.243984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.119 [2024-07-15 21:14:54.243994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947740, cid 5, qid 0 00:24:27.119 [2024-07-15 21:14:54.244192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.119 [2024-07-15 21:14:54.244198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.119 [2024-07-15 21:14:54.244202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947740) on tqpair=0x18c3ec0 00:24:27.119 [2024-07-15 21:14:54.244214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c3ec0) 00:24:27.119 [2024-07-15 21:14:54.244224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.119 [2024-07-15 21:14:54.244240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947740, cid 5, qid 0 00:24:27.119 [2024-07-15 21:14:54.244445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.119 [2024-07-15 21:14:54.244452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.119 [2024-07-15 21:14:54.244455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947740) on tqpair=0x18c3ec0 00:24:27.119 [2024-07-15 21:14:54.244473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c3ec0) 00:24:27.119 [2024-07-15 21:14:54.244483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.119 [2024-07-15 21:14:54.244491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3ec0) 00:24:27.119 [2024-07-15 21:14:54.244501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.119 [2024-07-15 21:14:54.244510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18c3ec0) 00:24:27.119 [2024-07-15 21:14:54.244520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.119 [2024-07-15 21:14:54.244527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18c3ec0) 00:24:27.119 [2024-07-15 21:14:54.244537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.119 [2024-07-15 21:14:54.244548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947740, cid 5, qid 0 00:24:27.119 [2024-07-15 21:14:54.244553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19475c0, cid 4, qid 0 00:24:27.119 [2024-07-15 21:14:54.244558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19478c0, cid 6, qid 0 00:24:27.119 [2024-07-15 21:14:54.244562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947a40, cid 7, qid 0 00:24:27.119 [2024-07-15 21:14:54.244813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.119 [2024-07-15 21:14:54.244819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.119 [2024-07-15 21:14:54.244823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244826] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3ec0): datao=0, datal=8192, cccid=5 00:24:27.119 [2024-07-15 21:14:54.244831] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1947740) on tqpair(0x18c3ec0): expected_datao=0, payload_size=8192 00:24:27.119 [2024-07-15 21:14:54.244835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244925] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244929] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.119 [2024-07-15 21:14:54.244940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.119 [2024-07-15 21:14:54.244944] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244947] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3ec0): datao=0, datal=512, cccid=4 00:24:27.119 [2024-07-15 21:14:54.244952] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19475c0) on tqpair(0x18c3ec0): expected_datao=0, payload_size=512 00:24:27.119 [2024-07-15 21:14:54.244956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244986] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244990] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.244996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.119 [2024-07-15 21:14:54.245002] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.119 [2024-07-15 21:14:54.245005] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.245009] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3ec0): datao=0, datal=512, cccid=6 00:24:27.119 [2024-07-15 21:14:54.245013] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19478c0) on tqpair(0x18c3ec0): expected_datao=0, payload_size=512 00:24:27.119 [2024-07-15 21:14:54.245017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.245023] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.245027] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.245033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.119 [2024-07-15 21:14:54.245040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.119 [2024-07-15 21:14:54.245043] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.245047] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3ec0): datao=0, datal=4096, cccid=7 00:24:27.119 [2024-07-15 21:14:54.245051] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1947a40) on tqpair(0x18c3ec0): expected_datao=0, payload_size=4096 00:24:27.119 [2024-07-15 21:14:54.245055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.245062] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.245065] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.249237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.119 [2024-07-15 21:14:54.249245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.119 [2024-07-15 21:14:54.249248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.249252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947740) on tqpair=0x18c3ec0 00:24:27.119 [2024-07-15 21:14:54.249265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.119 [2024-07-15 21:14:54.249271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.119 [2024-07-15 21:14:54.249274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.249278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19475c0) on tqpair=0x18c3ec0 00:24:27.119 [2024-07-15 21:14:54.249288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.119 [2024-07-15 21:14:54.249293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.119 [2024-07-15 21:14:54.249297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.249301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19478c0) on tqpair=0x18c3ec0 00:24:27.119 [2024-07-15 21:14:54.249307] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.119 [2024-07-15 21:14:54.249313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.119 [2024-07-15 21:14:54.249317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.119 [2024-07-15 21:14:54.249320] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947a40) on tqpair=0x18c3ec0 00:24:27.119 ===================================================== 00:24:27.119 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:27.119 ===================================================== 00:24:27.119 Controller Capabilities/Features 00:24:27.119 ================================ 00:24:27.119 Vendor ID: 8086 00:24:27.119 Subsystem Vendor ID: 8086 00:24:27.119 Serial Number: SPDK00000000000001 00:24:27.119 Model Number: SPDK bdev Controller 00:24:27.119 Firmware Version: 24.09 00:24:27.119 Recommended Arb Burst: 6 00:24:27.119 IEEE OUI Identifier: e4 d2 5c 00:24:27.119 Multi-path I/O 00:24:27.119 May have multiple subsystem ports: Yes 00:24:27.119 May have multiple controllers: Yes 00:24:27.119 Associated with SR-IOV VF: No 00:24:27.119 Max Data Transfer Size: 131072 00:24:27.119 Max Number of Namespaces: 32 00:24:27.119 Max Number of I/O Queues: 127 00:24:27.119 NVMe Specification Version (VS): 1.3 00:24:27.119 NVMe Specification Version (Identify): 1.3 00:24:27.119 Maximum Queue Entries: 128 00:24:27.119 Contiguous Queues Required: Yes 00:24:27.119 Arbitration Mechanisms Supported 00:24:27.119 Weighted Round Robin: Not Supported 00:24:27.119 Vendor Specific: Not Supported 00:24:27.119 Reset Timeout: 15000 ms 00:24:27.119 Doorbell Stride: 4 bytes 00:24:27.119 NVM Subsystem Reset: Not Supported 00:24:27.119 Command Sets Supported 00:24:27.119 NVM Command Set: Supported 00:24:27.119 Boot Partition: Not Supported 00:24:27.119 Memory Page Size Minimum: 4096 bytes 00:24:27.119 Memory Page Size Maximum: 4096 bytes 00:24:27.119 Persistent Memory Region: Not Supported 00:24:27.119 Optional Asynchronous Events Supported 00:24:27.119 Namespace Attribute Notices: Supported 00:24:27.119 Firmware Activation Notices: Not Supported 00:24:27.119 ANA Change Notices: Not Supported 00:24:27.119 PLE Aggregate Log Change Notices: Not Supported 00:24:27.119 LBA Status Info Alert Notices: Not Supported 00:24:27.119 EGE Aggregate Log Change Notices: Not Supported 00:24:27.119 Normal NVM Subsystem Shutdown event: Not Supported 00:24:27.120 Zone Descriptor Change Notices: Not Supported 00:24:27.120 Discovery Log Change Notices: Not Supported 00:24:27.120 Controller Attributes 00:24:27.120 128-bit Host Identifier: Supported 00:24:27.120 Non-Operational Permissive Mode: Not Supported 00:24:27.120 NVM Sets: Not Supported 00:24:27.120 Read Recovery Levels: Not Supported 00:24:27.120 Endurance Groups: Not Supported 00:24:27.120 Predictable Latency Mode: Not Supported 00:24:27.120 Traffic Based Keep ALive: Not Supported 00:24:27.120 Namespace Granularity: Not Supported 00:24:27.120 SQ Associations: Not Supported 00:24:27.120 UUID List: Not Supported 00:24:27.120 Multi-Domain Subsystem: Not Supported 00:24:27.120 Fixed Capacity Management: Not Supported 00:24:27.120 Variable Capacity Management: Not Supported 00:24:27.120 Delete Endurance Group: Not Supported 00:24:27.120 Delete NVM Set: Not Supported 00:24:27.120 Extended LBA Formats Supported: Not Supported 00:24:27.120 Flexible Data Placement Supported: Not Supported 00:24:27.120 00:24:27.120 Controller Memory Buffer Support 00:24:27.120 ================================ 00:24:27.120 Supported: No 00:24:27.120 00:24:27.120 Persistent Memory Region Support 00:24:27.120 ================================ 00:24:27.120 Supported: No 00:24:27.120 00:24:27.120 Admin Command Set Attributes 00:24:27.120 ============================ 00:24:27.120 Security 
Send/Receive: Not Supported 00:24:27.120 Format NVM: Not Supported 00:24:27.120 Firmware Activate/Download: Not Supported 00:24:27.120 Namespace Management: Not Supported 00:24:27.120 Device Self-Test: Not Supported 00:24:27.120 Directives: Not Supported 00:24:27.120 NVMe-MI: Not Supported 00:24:27.120 Virtualization Management: Not Supported 00:24:27.120 Doorbell Buffer Config: Not Supported 00:24:27.120 Get LBA Status Capability: Not Supported 00:24:27.120 Command & Feature Lockdown Capability: Not Supported 00:24:27.120 Abort Command Limit: 4 00:24:27.120 Async Event Request Limit: 4 00:24:27.120 Number of Firmware Slots: N/A 00:24:27.120 Firmware Slot 1 Read-Only: N/A 00:24:27.120 Firmware Activation Without Reset: N/A 00:24:27.120 Multiple Update Detection Support: N/A 00:24:27.120 Firmware Update Granularity: No Information Provided 00:24:27.120 Per-Namespace SMART Log: No 00:24:27.120 Asymmetric Namespace Access Log Page: Not Supported 00:24:27.120 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:27.120 Command Effects Log Page: Supported 00:24:27.120 Get Log Page Extended Data: Supported 00:24:27.120 Telemetry Log Pages: Not Supported 00:24:27.120 Persistent Event Log Pages: Not Supported 00:24:27.120 Supported Log Pages Log Page: May Support 00:24:27.120 Commands Supported & Effects Log Page: Not Supported 00:24:27.120 Feature Identifiers & Effects Log Page:May Support 00:24:27.120 NVMe-MI Commands & Effects Log Page: May Support 00:24:27.120 Data Area 4 for Telemetry Log: Not Supported 00:24:27.120 Error Log Page Entries Supported: 128 00:24:27.120 Keep Alive: Supported 00:24:27.120 Keep Alive Granularity: 10000 ms 00:24:27.120 00:24:27.120 NVM Command Set Attributes 00:24:27.120 ========================== 00:24:27.120 Submission Queue Entry Size 00:24:27.120 Max: 64 00:24:27.120 Min: 64 00:24:27.120 Completion Queue Entry Size 00:24:27.120 Max: 16 00:24:27.120 Min: 16 00:24:27.120 Number of Namespaces: 32 00:24:27.120 Compare Command: Supported 00:24:27.120 Write Uncorrectable Command: Not Supported 00:24:27.120 Dataset Management Command: Supported 00:24:27.120 Write Zeroes Command: Supported 00:24:27.120 Set Features Save Field: Not Supported 00:24:27.120 Reservations: Supported 00:24:27.120 Timestamp: Not Supported 00:24:27.120 Copy: Supported 00:24:27.120 Volatile Write Cache: Present 00:24:27.120 Atomic Write Unit (Normal): 1 00:24:27.120 Atomic Write Unit (PFail): 1 00:24:27.120 Atomic Compare & Write Unit: 1 00:24:27.120 Fused Compare & Write: Supported 00:24:27.120 Scatter-Gather List 00:24:27.120 SGL Command Set: Supported 00:24:27.120 SGL Keyed: Supported 00:24:27.120 SGL Bit Bucket Descriptor: Not Supported 00:24:27.120 SGL Metadata Pointer: Not Supported 00:24:27.120 Oversized SGL: Not Supported 00:24:27.120 SGL Metadata Address: Not Supported 00:24:27.120 SGL Offset: Supported 00:24:27.120 Transport SGL Data Block: Not Supported 00:24:27.120 Replay Protected Memory Block: Not Supported 00:24:27.120 00:24:27.120 Firmware Slot Information 00:24:27.120 ========================= 00:24:27.120 Active slot: 1 00:24:27.120 Slot 1 Firmware Revision: 24.09 00:24:27.120 00:24:27.120 00:24:27.120 Commands Supported and Effects 00:24:27.120 ============================== 00:24:27.120 Admin Commands 00:24:27.120 -------------- 00:24:27.120 Get Log Page (02h): Supported 00:24:27.120 Identify (06h): Supported 00:24:27.120 Abort (08h): Supported 00:24:27.120 Set Features (09h): Supported 00:24:27.120 Get Features (0Ah): Supported 00:24:27.120 Asynchronous Event Request (0Ch): 
Supported 00:24:27.120 Keep Alive (18h): Supported 00:24:27.120 I/O Commands 00:24:27.120 ------------ 00:24:27.120 Flush (00h): Supported LBA-Change 00:24:27.120 Write (01h): Supported LBA-Change 00:24:27.120 Read (02h): Supported 00:24:27.120 Compare (05h): Supported 00:24:27.120 Write Zeroes (08h): Supported LBA-Change 00:24:27.120 Dataset Management (09h): Supported LBA-Change 00:24:27.120 Copy (19h): Supported LBA-Change 00:24:27.120 00:24:27.120 Error Log 00:24:27.120 ========= 00:24:27.120 00:24:27.120 Arbitration 00:24:27.120 =========== 00:24:27.120 Arbitration Burst: 1 00:24:27.120 00:24:27.120 Power Management 00:24:27.120 ================ 00:24:27.120 Number of Power States: 1 00:24:27.120 Current Power State: Power State #0 00:24:27.120 Power State #0: 00:24:27.120 Max Power: 0.00 W 00:24:27.120 Non-Operational State: Operational 00:24:27.120 Entry Latency: Not Reported 00:24:27.120 Exit Latency: Not Reported 00:24:27.120 Relative Read Throughput: 0 00:24:27.120 Relative Read Latency: 0 00:24:27.120 Relative Write Throughput: 0 00:24:27.120 Relative Write Latency: 0 00:24:27.120 Idle Power: Not Reported 00:24:27.120 Active Power: Not Reported 00:24:27.120 Non-Operational Permissive Mode: Not Supported 00:24:27.120 00:24:27.120 Health Information 00:24:27.120 ================== 00:24:27.120 Critical Warnings: 00:24:27.120 Available Spare Space: OK 00:24:27.120 Temperature: OK 00:24:27.120 Device Reliability: OK 00:24:27.120 Read Only: No 00:24:27.120 Volatile Memory Backup: OK 00:24:27.120 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:27.120 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:27.120 Available Spare: 0% 00:24:27.120 Available Spare Threshold: 0% 00:24:27.120 Life Percentage Used:[2024-07-15 21:14:54.249420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.120 [2024-07-15 21:14:54.249425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18c3ec0) 00:24:27.120 [2024-07-15 21:14:54.249432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.120 [2024-07-15 21:14:54.249445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947a40, cid 7, qid 0 00:24:27.120 [2024-07-15 21:14:54.249678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.120 [2024-07-15 21:14:54.249684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.120 [2024-07-15 21:14:54.249687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.120 [2024-07-15 21:14:54.249691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947a40) on tqpair=0x18c3ec0 00:24:27.120 [2024-07-15 21:14:54.249722] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:27.120 [2024-07-15 21:14:54.249731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946fc0) on tqpair=0x18c3ec0 00:24:27.120 [2024-07-15 21:14:54.249738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.120 [2024-07-15 21:14:54.249743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947140) on tqpair=0x18c3ec0 00:24:27.120 [2024-07-15 21:14:54.249747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.120 [2024-07-15 
21:14:54.249752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19472c0) on tqpair=0x18c3ec0 00:24:27.120 [2024-07-15 21:14:54.249759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.120 [2024-07-15 21:14:54.249764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.120 [2024-07-15 21:14:54.249768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.120 [2024-07-15 21:14:54.249776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.120 [2024-07-15 21:14:54.249780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.120 [2024-07-15 21:14:54.249783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.120 [2024-07-15 21:14:54.249790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.120 [2024-07-15 21:14:54.249802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.120 [2024-07-15 21:14:54.249980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.120 [2024-07-15 21:14:54.249987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.120 [2024-07-15 21:14:54.249990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.120 [2024-07-15 21:14:54.249994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.120 [2024-07-15 21:14:54.250000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.120 [2024-07-15 21:14:54.250004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.250014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.250028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.250244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.250250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.250254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.250262] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:27.121 [2024-07-15 21:14:54.250267] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:27.121 [2024-07-15 21:14:54.250276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250283] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.250290] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.250300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.250534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.250540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.250543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.250557] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.250573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.250583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.250786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.250793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.250796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.250810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.250817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.250824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.250833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.251089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.251095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.251098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.251111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.251125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.251135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.251322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.251329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.251332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.251345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.251360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.251369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.251591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.251597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.251600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.251614] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.251628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.251639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.251895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.251902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.251905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.251918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.251926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.251932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.251942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.252148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.252154] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.252157] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.252161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.252170] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.252174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.252178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3ec0) 00:24:27.121 [2024-07-15 21:14:54.252184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.121 [2024-07-15 21:14:54.252194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1947440, cid 3, qid 0 00:24:27.121 [2024-07-15 21:14:54.256238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.121 [2024-07-15 21:14:54.256246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.121 [2024-07-15 21:14:54.256250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.121 [2024-07-15 21:14:54.256253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1947440) on tqpair=0x18c3ec0 00:24:27.121 [2024-07-15 21:14:54.256261] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:24:27.121 0% 00:24:27.121 Data Units Read: 0 00:24:27.121 Data Units Written: 0 00:24:27.121 Host Read Commands: 0 00:24:27.121 Host Write Commands: 0 00:24:27.121 Controller Busy Time: 0 minutes 00:24:27.121 Power Cycles: 0 00:24:27.121 Power On Hours: 0 hours 00:24:27.121 Unsafe Shutdowns: 0 00:24:27.121 Unrecoverable Media Errors: 0 00:24:27.121 Lifetime Error Log Entries: 0 00:24:27.121 Warning Temperature Time: 0 minutes 00:24:27.121 Critical Temperature Time: 0 minutes 00:24:27.121 00:24:27.121 Number of Queues 00:24:27.121 ================ 00:24:27.121 Number of I/O Submission Queues: 127 00:24:27.121 Number of I/O Completion Queues: 127 00:24:27.121 00:24:27.121 Active Namespaces 00:24:27.121 ================= 00:24:27.121 Namespace ID:1 00:24:27.121 Error Recovery Timeout: Unlimited 00:24:27.121 Command Set Identifier: NVM (00h) 00:24:27.121 Deallocate: Supported 00:24:27.121 Deallocated/Unwritten Error: Not Supported 00:24:27.121 Deallocated Read Value: Unknown 00:24:27.121 Deallocate in Write Zeroes: Not Supported 00:24:27.121 Deallocated Guard Field: 0xFFFF 00:24:27.121 Flush: Supported 00:24:27.121 Reservation: Supported 00:24:27.121 Namespace Sharing Capabilities: Multiple Controllers 00:24:27.121 Size (in LBAs): 131072 (0GiB) 00:24:27.121 Capacity (in LBAs): 131072 (0GiB) 00:24:27.121 Utilization (in LBAs): 131072 (0GiB) 00:24:27.121 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:27.121 EUI64: ABCDEF0123456789 00:24:27.121 UUID: fb08e134-9ff8-433c-99a2-82333bcdfabb 00:24:27.121 Thin Provisioning: Not Supported 00:24:27.121 Per-NS Atomic Units: Yes 00:24:27.121 Atomic Boundary Size (Normal): 0 00:24:27.121 Atomic Boundary Size (PFail): 0 00:24:27.121 Atomic Boundary Offset: 0 00:24:27.121 Maximum Single Source Range Length: 65535 00:24:27.121 Maximum Copy Length: 65535 00:24:27.121 Maximum Source Range Count: 1 00:24:27.121 NGUID/EUI64 Never Reused: No 00:24:27.121 Namespace Write Protected: No 00:24:27.121 
Number of LBA Formats: 1 00:24:27.121 Current LBA Format: LBA Format #00 00:24:27.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:27.121 00:24:27.121 21:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:27.121 21:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.121 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.121 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.122 rmmod nvme_tcp 00:24:27.122 rmmod nvme_fabrics 00:24:27.122 rmmod nvme_keyring 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2065168 ']' 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2065168 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2065168 ']' 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2065168 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2065168 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2065168' 00:24:27.122 killing process with pid 2065168 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2065168 00:24:27.122 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2065168 00:24:27.383 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:27.383 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:27.383 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:27.383 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.383 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.383 21:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
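
The xtrace above is the identify test's teardown: the temporary subsystem nqn.2016-06.io.spdk:cnode1 is deleted over RPC, the kernel NVMe-oF initiator modules are unloaded (the rmmod lines are that removal), and the nvmf_tgt process (pid 2065168 in this run) is killed. A condensed manual equivalent of those steps, assuming the target's RPC socket is at the default /var/tmp/spdk.sock and that $nvmfpid holds the target's pid (both are placeholders here, not values taken from the trace):

    # Delete the test subsystem from the running nvmf target over RPC
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload the kernel NVMe-oF initiator modules loaded for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf_tgt process; the harness then waits for it to exit
    kill "$nvmfpid"
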
00:24:27.383 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.383 21:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.930 21:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.930 00:24:29.930 real 0m12.189s 00:24:29.930 user 0m8.158s 00:24:29.930 sys 0m6.646s 00:24:29.930 21:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:29.930 21:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:29.930 ************************************ 00:24:29.930 END TEST nvmf_identify 00:24:29.930 ************************************ 00:24:29.930 21:14:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:29.930 21:14:56 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:29.930 21:14:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:29.930 21:14:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.930 21:14:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:29.930 ************************************ 00:24:29.930 START TEST nvmf_perf 00:24:29.930 ************************************ 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:29.930 * Looking for test storage... 00:24:29.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
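
nvmf/common.sh, sourced above, pins the defaults this perf run relies on: TCP port 4420 (4421 and 4422 as spares), NET_TYPE=phy, and a freshly generated host NQN whose UUID part becomes the host ID. A minimal sketch of that host-identity step with nvme-cli; the parameter expansion below is an approximation of what the script does, not a quote of it:

    # Generate a unique host NQN and derive the UUID-style host ID from it
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    # Both values are later handed to 'nvme connect' as --hostnqn / --hostid
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
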
00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:29.930 21:14:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.074 21:15:04 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:38.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:38.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:24:38.074 Found net devices under 0000:31:00.0: cvl_0_0 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:38.074 Found net devices under 0000:31:00.1: cvl_0_1 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.074 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.075 21:15:04 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:24:38.075 00:24:38.075 --- 10.0.0.2 ping statistics --- 00:24:38.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.075 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:24:38.075 00:24:38.075 --- 10.0.0.1 ping statistics --- 00:24:38.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.075 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2070167 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2070167 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2070167 ']' 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.075 21:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.075 [2024-07-15 21:15:05.016464] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
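
The block above is nvmf_tcp_init from nvmf/common.sh doing the test-network plumbing: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, its sibling (cvl_0_1) stays in the host namespace as the initiator side, and a single iptables rule opens the NVMe/TCP port before reachability is checked in both directions. A condensed sketch of the same commands, using the interface names and addresses reported in this run and with $SPDK_DIR standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk:

# Start from clean interfaces, then split the port pair across namespaces.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator side keeps 10.0.0.1/24 on the host; target side gets 10.0.0.2/24 in the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and verify the link both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Every target-side command afterwards is wrapped in the namespace, e.g. the target launch:
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
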
00:24:38.075 [2024-07-15 21:15:05.016529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.075 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.075 [2024-07-15 21:15:05.099097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.075 [2024-07-15 21:15:05.174141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.075 [2024-07-15 21:15:05.174175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.075 [2024-07-15 21:15:05.174183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.075 [2024-07-15 21:15:05.174189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.075 [2024-07-15 21:15:05.174195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.075 [2024-07-15 21:15:05.174287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.075 [2024-07-15 21:15:05.174473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.075 [2024-07-15 21:15:05.174474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.075 [2024-07-15 21:15:05.174317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.644 21:15:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.644 21:15:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:38.644 21:15:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.644 21:15:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.644 21:15:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.645 21:15:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.645 21:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:38.645 21:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:39.216 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:39.216 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:39.216 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:39.216 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:39.477 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:39.477 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:39.477 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:39.477 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:39.477 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:39.738 [2024-07-15 21:15:06.809625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:24:39.738 21:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.738 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:39.738 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.024 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:40.024 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:40.285 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.285 [2024-07-15 21:15:07.488140] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.285 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:40.545 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:40.545 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:40.545 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:40.545 21:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:41.928 Initializing NVMe Controllers 00:24:41.928 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:41.928 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:41.928 Initialization complete. Launching workers. 00:24:41.928 ======================================================== 00:24:41.928 Latency(us) 00:24:41.928 Device Information : IOPS MiB/s Average min max 00:24:41.928 PCIE (0000:65:00.0) NSID 1 from core 0: 79678.82 311.25 401.01 13.38 7195.59 00:24:41.928 ======================================================== 00:24:41.928 Total : 79678.82 311.25 401.01 13.38 7195.59 00:24:41.928 00:24:41.928 21:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:41.928 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.313 Initializing NVMe Controllers 00:24:43.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:43.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:43.313 Initialization complete. Launching workers. 
00:24:43.313 ======================================================== 00:24:43.313 Latency(us) 00:24:43.313 Device Information : IOPS MiB/s Average min max 00:24:43.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.00 0.38 10342.68 232.41 46343.08 00:24:43.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16485.27 6995.36 48887.82 00:24:43.313 ======================================================== 00:24:43.313 Total : 159.00 0.62 12699.27 232.41 48887.82 00:24:43.313 00:24:43.313 21:15:10 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:43.313 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.345 Initializing NVMe Controllers 00:24:44.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:44.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:44.345 Initialization complete. Launching workers. 00:24:44.345 ======================================================== 00:24:44.345 Latency(us) 00:24:44.345 Device Information : IOPS MiB/s Average min max 00:24:44.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11147.78 43.55 2870.05 490.82 6370.07 00:24:44.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3831.92 14.97 8395.86 6887.95 16010.21 00:24:44.345 ======================================================== 00:24:44.345 Total : 14979.70 58.51 4283.60 490.82 16010.21 00:24:44.345 00:24:44.345 21:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:44.345 21:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:44.345 21:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.617 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.156 Initializing NVMe Controllers 00:24:47.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.156 Controller IO queue size 128, less than required. 00:24:47.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.156 Controller IO queue size 128, less than required. 00:24:47.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:47.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:47.156 Initialization complete. Launching workers. 
00:24:47.156 ======================================================== 00:24:47.156 Latency(us) 00:24:47.156 Device Information : IOPS MiB/s Average min max 00:24:47.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1136.48 284.12 115538.16 64153.91 182297.37 00:24:47.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 585.49 146.37 223606.79 62350.07 344403.28 00:24:47.156 ======================================================== 00:24:47.156 Total : 1721.97 430.49 152282.75 62350.07 344403.28 00:24:47.156 00:24:47.156 21:15:13 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:47.156 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.156 No valid NVMe controllers or AIO or URING devices found 00:24:47.156 Initializing NVMe Controllers 00:24:47.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.156 Controller IO queue size 128, less than required. 00:24:47.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.156 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:47.156 Controller IO queue size 128, less than required. 00:24:47.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.156 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:47.156 WARNING: Some requested NVMe devices were skipped 00:24:47.156 21:15:14 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:47.156 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.711 Initializing NVMe Controllers 00:24:49.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.711 Controller IO queue size 128, less than required. 00:24:49.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.711 Controller IO queue size 128, less than required. 00:24:49.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:49.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:49.711 Initialization complete. Launching workers. 
00:24:49.711 00:24:49.711 ==================== 00:24:49.711 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:49.711 TCP transport: 00:24:49.711 polls: 33378 00:24:49.711 idle_polls: 14332 00:24:49.711 sock_completions: 19046 00:24:49.711 nvme_completions: 4973 00:24:49.711 submitted_requests: 7394 00:24:49.711 queued_requests: 1 00:24:49.711 00:24:49.711 ==================== 00:24:49.711 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:49.711 TCP transport: 00:24:49.711 polls: 32371 00:24:49.711 idle_polls: 13165 00:24:49.711 sock_completions: 19206 00:24:49.711 nvme_completions: 5011 00:24:49.711 submitted_requests: 7472 00:24:49.711 queued_requests: 1 00:24:49.711 ======================================================== 00:24:49.711 Latency(us) 00:24:49.711 Device Information : IOPS MiB/s Average min max 00:24:49.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1240.17 310.04 106628.06 72252.85 152805.21 00:24:49.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1249.64 312.41 104419.77 46811.90 149514.16 00:24:49.711 ======================================================== 00:24:49.711 Total : 2489.81 622.45 105519.71 46811.90 152805.21 00:24:49.711 00:24:49.711 21:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:49.711 21:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.971 rmmod nvme_tcp 00:24:49.971 rmmod nvme_fabrics 00:24:49.971 rmmod nvme_keyring 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2070167 ']' 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2070167 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2070167 ']' 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2070167 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2070167 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:49.971 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:49.971 21:15:17 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2070167' 00:24:49.971 killing process with pid 2070167 00:24:49.972 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2070167 00:24:49.972 21:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2070167 00:24:51.883 21:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.883 21:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.883 21:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.883 21:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.883 21:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.883 21:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.883 21:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.883 21:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.424 21:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:54.424 00:24:54.424 real 0m24.514s 00:24:54.424 user 0m57.567s 00:24:54.424 sys 0m8.513s 00:24:54.424 21:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:54.424 21:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:54.424 ************************************ 00:24:54.424 END TEST nvmf_perf 00:24:54.424 ************************************ 00:24:54.424 21:15:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:54.424 21:15:21 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:54.424 21:15:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:54.424 21:15:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.424 21:15:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.424 ************************************ 00:24:54.424 START TEST nvmf_fio_host 00:24:54.424 ************************************ 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:54.424 * Looking for test storage... 
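
The nvmf_perf test that just ended exercised, on the target side, a short RPC sequence plus one spdk_nvme_perf command per measurement. A condensed sketch of the calls visible in its trace — $SPDK_DIR again stands in for the workspace checkout, and only the first of the fabric runs is shown; the later ones vary -q/-o and add -HI, -c/-P and --transport-stat:

RPC=$SPDK_DIR/scripts/rpc.py

# Back the subsystem with a malloc bdev (size 64, block size 512 -> Malloc0) and the
# local NVMe drive (gen_nvme.sh output piped into load_subsystem_config -> Nvme0n1).
$RPC bdev_malloc_create 64 512
$SPDK_DIR/scripts/gen_nvme.sh | $RPC load_subsystem_config

# TCP transport, one subsystem, two namespaces, listeners on the namespaced address.
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 4 KiB random read/write at queue depth 1 for 1 second over the fabric (the first fabric run above).
$SPDK_DIR/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

# Teardown mirrors the end of the trace: drop the subsystem, then stop the target.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
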
00:24:54.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:54.424 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.425 21:15:21 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:02.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:02.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:02.566 Found net devices under 0000:31:00.0: cvl_0_0 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:02.566 Found net devices under 0000:31:00.1: cvl_0_1 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
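
The discovery loop above (gather_supported_nvmf_pci_devs) is what turns PCI IDs into the cvl_0_0 / cvl_0_1 names used for the rest of the test: it keeps only supported controllers — here the two Intel E810 functions, vendor:device 0x8086:0x159b bound to the ice driver — and then reads the kernel's PCI-to-netdev mapping out of sysfs, exactly as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) line shows. The same lookup can be reproduced by hand; the PCI addresses are the ones this run reported:

# Which netdev(s) sit behind each PCI function of the E810 pair.
ls /sys/bus/pci/devices/0000:31:00.0/net    # cvl_0_0 in this run
ls /sys/bus/pci/devices/0000:31:00.1/net    # cvl_0_1 in this run

# Confirm the device ID and bound driver that the script filters on.
lspci -nn -s 31:00.0                                              # ... [8086:159b] ...
basename "$(readlink /sys/bus/pci/devices/0000:31:00.0/driver)"   # ice
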
00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:02.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:25:02.566 00:25:02.566 --- 10.0.0.2 ping statistics --- 00:25:02.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.566 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:25:02.566 00:25:02.566 --- 10.0.0.1 ping statistics --- 00:25:02.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.566 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2078070 00:25:02.566 21:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.567 21:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:02.567 21:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2078070 00:25:02.567 21:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2078070 ']' 00:25:02.567 21:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.567 21:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.567 21:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.567 21:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.567 21:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.567 [2024-07-15 21:15:29.609525] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:25:02.567 [2024-07-15 21:15:29.609590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.567 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.567 [2024-07-15 21:15:29.688900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.567 [2024-07-15 21:15:29.764204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:02.567 [2024-07-15 21:15:29.764249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.567 [2024-07-15 21:15:29.764256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.567 [2024-07-15 21:15:29.764263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.567 [2024-07-15 21:15:29.764269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.567 [2024-07-15 21:15:29.764392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.567 [2024-07-15 21:15:29.764540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.567 [2024-07-15 21:15:29.764930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.567 [2024-07-15 21:15:29.764931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.138 21:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.138 21:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:03.138 21:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:03.397 [2024-07-15 21:15:30.533116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.397 21:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:03.397 21:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:03.397 21:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.397 21:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:03.657 Malloc1 00:25:03.657 21:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:03.916 21:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:03.916 21:15:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.176 [2024-07-15 21:15:31.248008] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:04.176 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:04.474 21:15:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:04.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:04.748 fio-3.35 00:25:04.748 Starting 1 thread 00:25:04.748 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.278 00:25:07.278 test: (groupid=0, jobs=1): err= 0: pid=2078891: Mon Jul 15 21:15:34 2024 00:25:07.278 read: IOPS=13.0k, BW=50.8MiB/s (53.3MB/s)(102MiB/2004msec) 00:25:07.278 slat (usec): min=2, max=294, avg= 2.17, stdev= 2.53 00:25:07.278 clat (usec): min=3753, max=8831, avg=5411.34, stdev=929.17 00:25:07.278 lat (usec): min=3775, max=8833, avg=5413.50, stdev=929.24 00:25:07.278 clat percentiles (usec): 00:25:07.278 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817], 00:25:07.278 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:25:07.278 | 70.00th=[ 5342], 80.00th=[ 5669], 90.00th=[ 7177], 95.00th=[ 7504], 00:25:07.278 | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[ 8455], 99.95th=[ 8586], 00:25:07.278 | 99.99th=[ 8717] 00:25:07.278 bw ( KiB/s): min=38296, 
max=56712, per=99.97%, avg=52024.00, stdev=9152.36, samples=4 00:25:07.278 iops : min= 9574, max=14178, avg=13006.00, stdev=2288.09, samples=4 00:25:07.278 write: IOPS=13.0k, BW=50.8MiB/s (53.3MB/s)(102MiB/2004msec); 0 zone resets 00:25:07.278 slat (usec): min=2, max=274, avg= 2.27, stdev= 1.86 00:25:07.278 clat (usec): min=2895, max=7769, avg=4360.62, stdev=756.80 00:25:07.278 lat (usec): min=2913, max=7771, avg=4362.89, stdev=756.90 00:25:07.278 clat percentiles (usec): 00:25:07.278 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3851], 00:25:07.278 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4228], 00:25:07.278 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5800], 95.00th=[ 6063], 00:25:07.278 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 6980], 99.95th=[ 7177], 00:25:07.278 | 99.99th=[ 7701] 00:25:07.278 bw ( KiB/s): min=38864, max=56632, per=99.95%, avg=52004.00, stdev=8761.89, samples=4 00:25:07.278 iops : min= 9716, max=14158, avg=13001.00, stdev=2190.47, samples=4 00:25:07.278 lat (msec) : 4=18.22%, 10=81.78% 00:25:07.278 cpu : usr=70.04%, sys=26.66%, ctx=40, majf=0, minf=7 00:25:07.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:07.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:07.278 issued rwts: total=26073,26068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:07.278 00:25:07.278 Run status group 0 (all jobs): 00:25:07.278 READ: bw=50.8MiB/s (53.3MB/s), 50.8MiB/s-50.8MiB/s (53.3MB/s-53.3MB/s), io=102MiB (107MB), run=2004-2004msec 00:25:07.278 WRITE: bw=50.8MiB/s (53.3MB/s), 50.8MiB/s-50.8MiB/s (53.3MB/s-53.3MB/s), io=102MiB (107MB), run=2004-2004msec 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:07.278 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:07.279 21:15:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.279 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:07.279 fio-3.35 00:25:07.279 Starting 1 thread 00:25:07.279 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.811 00:25:09.811 test: (groupid=0, jobs=1): err= 0: pid=2079396: Mon Jul 15 21:15:36 2024 00:25:09.811 read: IOPS=9107, BW=142MiB/s (149MB/s)(286MiB/2007msec) 00:25:09.811 slat (usec): min=3, max=115, avg= 3.64, stdev= 1.70 00:25:09.811 clat (usec): min=1621, max=16288, avg=8561.54, stdev=2152.09 00:25:09.811 lat (usec): min=1624, max=16292, avg=8565.19, stdev=2152.28 00:25:09.811 clat percentiles (usec): 00:25:09.811 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5866], 20.00th=[ 6718], 00:25:09.811 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 8979], 00:25:09.811 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11600], 95.00th=[11994], 00:25:09.811 | 99.00th=[13960], 99.50th=[14484], 99.90th=[15270], 99.95th=[15664], 00:25:09.811 | 99.99th=[16319] 00:25:09.811 bw ( KiB/s): min=65824, max=83488, per=49.41%, avg=72008.00, stdev=7841.91, samples=4 00:25:09.811 iops : min= 4114, max= 5218, avg=4500.50, stdev=490.12, samples=4 00:25:09.811 write: IOPS=5320, BW=83.1MiB/s (87.2MB/s)(147MiB/1773msec); 0 zone resets 00:25:09.811 slat (usec): min=39, max=459, avg=41.35, stdev= 9.55 00:25:09.811 clat (usec): min=2271, max=18280, avg=9545.76, stdev=1696.54 00:25:09.811 lat (usec): min=2311, max=18418, avg=9587.11, stdev=1699.60 00:25:09.811 clat percentiles (usec): 00:25:09.811 | 1.00th=[ 6325], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8160], 00:25:09.811 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:25:09.811 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[12387], 00:25:09.811 | 99.00th=[14484], 99.50th=[16188], 99.90th=[17695], 99.95th=[18220], 00:25:09.811 | 99.99th=[18220] 00:25:09.811 bw ( KiB/s): min=68736, max=86624, per=88.10%, avg=74992.00, stdev=7953.04, samples=4 00:25:09.811 iops : min= 4296, max= 5414, avg=4687.00, stdev=497.06, samples=4 00:25:09.811 lat (msec) : 2=0.03%, 4=0.49%, 10=70.09%, 20=29.40% 00:25:09.811 cpu : usr=83.55%, sys=14.01%, ctx=14, majf=0, minf=16 00:25:09.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:09.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:09.811 issued rwts: total=18279,9433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:09.811 00:25:09.811 Run status group 0 (all jobs): 00:25:09.811 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=286MiB (299MB), run=2007-2007msec 00:25:09.811 WRITE: bw=83.1MiB/s (87.2MB/s), 83.1MiB/s-83.1MiB/s (87.2MB/s-87.2MB/s), io=147MiB (155MB), run=1773-1773msec 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.811 21:15:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.811 rmmod nvme_tcp 00:25:09.811 rmmod nvme_fabrics 00:25:09.811 rmmod nvme_keyring 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2078070 ']' 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2078070 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2078070 ']' 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2078070 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2078070 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2078070' 00:25:09.811 killing process with pid 2078070 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2078070 00:25:09.811 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2078070 00:25:10.071 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:10.071 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:10.071 21:15:37 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:10.071 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.071 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.071 21:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.071 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.071 21:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.607 21:15:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:12.607 00:25:12.607 real 0m18.011s 00:25:12.607 user 1m5.668s 00:25:12.607 sys 0m7.942s 00:25:12.607 21:15:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.607 21:15:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.607 ************************************ 00:25:12.607 END TEST nvmf_fio_host 00:25:12.607 ************************************ 00:25:12.607 21:15:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:12.607 21:15:39 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:12.607 21:15:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:12.607 21:15:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.607 21:15:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.607 ************************************ 00:25:12.607 START TEST nvmf_failover 00:25:12.607 ************************************ 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:12.607 * Looking for test storage... 
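(For reference, the fio_nvme step above reduces to the invocation sketched below. Paths and the --filename value are copied from the xtrace in this log; the job file contents themselves are not shown here, and this is only a condensed reconstruction, not an additional command that was run.)

  # Stock fio is run with the SPDK NVMe ioengine preloaded; the job file selects
  # ioengine=spdk, and --filename carries the NVMe/TCP connection parameters
  # instead of a block-device path.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'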
00:25:12.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:12.607 21:15:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.736 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:20.737 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:20.737 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:20.737 Found net devices under 0000:31:00.0: cvl_0_0 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:20.737 Found net devices under 0000:31:00.1: cvl_0_1 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:20.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:20.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:25:20.737 00:25:20.737 --- 10.0.0.2 ping statistics --- 00:25:20.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.737 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:25:20.737 00:25:20.737 --- 10.0.0.1 ping statistics --- 00:25:20.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.737 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2084597 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2084597 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2084597 ']' 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.737 21:15:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.737 [2024-07-15 21:15:47.621083] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
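(The nvmf_tcp_init sequence above amounts to the following: one E810 port, cvl_0_0, is moved into a private network namespace and addressed as the target at 10.0.0.2, while the other port, cvl_0_1, stays in the root namespace as the initiator at 10.0.0.1; both directions are then verified with ping. A condensed sketch of the same commands, interface names as they appear in this log, assuming the two ports are already cabled to each other:)

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns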
00:25:20.737 [2024-07-15 21:15:47.621150] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.737 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.737 [2024-07-15 21:15:47.720520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:20.737 [2024-07-15 21:15:47.815346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.737 [2024-07-15 21:15:47.815408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.737 [2024-07-15 21:15:47.815417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.737 [2024-07-15 21:15:47.815424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.737 [2024-07-15 21:15:47.815431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.737 [2024-07-15 21:15:47.815565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.737 [2024-07-15 21:15:47.815730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.737 [2024-07-15 21:15:47.815731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.307 21:15:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.307 21:15:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:21.307 21:15:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.307 21:15:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.307 21:15:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.307 21:15:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.307 21:15:48 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:21.307 [2024-07-15 21:15:48.589414] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.566 21:15:48 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:21.566 Malloc0 00:25:21.566 21:15:48 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:21.826 21:15:48 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.086 21:15:49 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.086 [2024-07-15 21:15:49.279835] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.086 21:15:49 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:22.346 [2024-07-15 
21:15:49.448256] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:22.347 21:15:49 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:22.347 [2024-07-15 21:15:49.616753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2085085 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2085085 /var/tmp/bdevperf.sock 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2085085 ']' 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:22.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.605 21:15:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:23.538 21:15:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.538 21:15:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:23.538 21:15:50 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.538 NVMe0n1 00:25:23.538 21:15:50 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.797 00:25:23.797 21:15:51 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2085276 00:25:23.797 21:15:51 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:23.797 21:15:51 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:25.176 21:15:52 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.176 [2024-07-15 21:15:52.171674] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cad0 is same with the state(5) to be set 00:25:25.176 [2024-07-15 21:15:52.171738] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x182cad0 is same with the state(5) to be set 00:25:25.176 [... identical tcp.c:1621 'recv state of tqpair=0x182cad0' *ERROR* lines elided ...] 00:25:25.177 [2024-07-15 21:15:52.171931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cad0 is same with the
state(5) to be set 00:25:25.177 [2024-07-15 21:15:52.171939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cad0 is same with the state(5) to be set 00:25:25.177 [2024-07-15 21:15:52.171944] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cad0 is same with the state(5) to be set 00:25:25.177 [2024-07-15 21:15:52.171948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cad0 is same with the state(5) to be set 00:25:25.177 21:15:52 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:28.465 21:15:55 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.465 00:25:28.465 21:15:55 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:28.465 [2024-07-15 21:15:55.614793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614835] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614859] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.465 [2024-07-15 21:15:55.614893] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 
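(The failover host test producing the listener-state messages around here follows roughly the RPC sequence sketched below, with rpc.py standing for .../spdk/scripts/rpc.py and the nqn, address and ports as configured earlier in this log: one malloc namespace is exposed behind nqn.2016-06.io.spdk:cnode1 on TCP listeners 4420/4421/4422, bdevperf attaches to it, and listeners are then removed and re-added one at a time so the initiator is forced to fail over between ports.)

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # bdevperf (started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f)
  # attaches a controller, then listeners are pulled out from under it:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # I/O fails over to 4421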
00:25:28.465 [... identical tcp.c:1621 'recv state of tqpair=0x182e200' *ERROR* lines elided ...] 00:25:28.466 [2024-07-15 21:15:55.615097]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.466 [2024-07-15 21:15:55.615103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.466 [2024-07-15 21:15:55.615107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.466 [2024-07-15 21:15:55.615112] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e200 is same with the state(5) to be set 00:25:28.466 21:15:55 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:31.755 21:15:58 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.755 [2024-07-15 21:15:58.790853] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.755 21:15:58 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:32.699 21:15:59 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:32.699 [2024-07-15 21:15:59.967346] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967394] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967399] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967403] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967412] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967425] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967434] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [2024-07-15 21:15:59.967439] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.699 [... identical tcp.c:1621 'recv state of tqpair=0x182ef70' *ERROR* lines elided ...] 00:25:32.700 [2024-07-15
21:15:59.967747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967763] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967838] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967842] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same 
with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:32.700 [2024-07-15 21:15:59.967866] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182ef70 is same with the state(5) to be set 00:25:33.011 21:15:59 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2085276 00:25:39.681 0 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2085085 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2085085 ']' 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2085085 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2085085 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2085085' 00:25:39.682 killing process with pid 2085085 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2085085 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2085085 00:25:39.682 21:16:06 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.682 [2024-07-15 21:15:49.683267] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:25:39.682 [2024-07-15 21:15:49.683322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085085 ] 00:25:39.682 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.682 [2024-07-15 21:15:49.749361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.682 [2024-07-15 21:15:49.813467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.682 Running I/O for 15 seconds... 
00:25:39.682 [2024-07-15 21:15:52.172505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:39.682 [2024-07-15 21:15:52.172541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.682 [2024-07-15 21:15:52.172558 - 21:15:52.174657] nvme_qpair.c: 243/474: nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for the remaining queued I/O (READ lba 94808-95680 and WRITE lba 95688-95808, len:8), each completion printed as ABORTED - SQ DELETION (00/08) qid:1
00:25:39.685 [2024-07-15 21:15:52.174677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:39.685 [2024-07-15 21:15:52.174684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:39.685 [2024-07-15 21:15:52.174691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0
00:25:39.685 [2024-07-15 21:15:52.174698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.685 [2024-07-15 21:15:52.174736] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1067e20 was disconnected and freed. reset controller.
00:25:39.685 [2024-07-15 21:15:52.174746] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:39.685 [2024-07-15 21:15:52.174766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.685 [2024-07-15 21:15:52.174774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:52.174783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.685 [2024-07-15 21:15:52.174790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:52.174798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.685 [2024-07-15 21:15:52.174805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:52.174813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.685 [2024-07-15 21:15:52.174820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:52.174834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:39.685 [2024-07-15 21:15:52.178378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:39.685 [2024-07-15 21:15:52.178403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106bfa0 (9): Bad file descriptor 00:25:39.685 [2024-07-15 21:15:52.218481] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:39.685 [2024-07-15 21:15:55.616587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.685 [2024-07-15 21:15:55.616623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.616634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.685 [2024-07-15 21:15:55.616641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.616654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.685 [2024-07-15 21:15:55.616662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.616670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.685 [2024-07-15 21:15:55.616677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.616685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bfa0 is same with the state(5) to be set 00:25:39.685 [2024-07-15 21:15:55.618212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.685 [2024-07-15 21:15:55.618234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.685 [2024-07-15 21:15:55.618256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.685 [2024-07-15 21:15:55.618272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.685 [2024-07-15 21:15:55.618289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.685 [2024-07-15 21:15:55.618305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.685 [2024-07-15 21:15:55.618321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.685 [2024-07-15 21:15:55.618579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.685 [2024-07-15 21:15:55.618587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618848] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.618984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.618994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.619002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.619011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.619018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.619027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.619034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.619046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.619054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.619063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.619070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.619079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.619086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.686 [2024-07-15 21:15:55.619095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.686 [2024-07-15 21:15:55.619103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25408 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:39.687 [2024-07-15 21:15:55.619359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619527] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.687 [2024-07-15 21:15:55.619807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.687 [2024-07-15 21:15:55.619816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.687 [2024-07-15 21:15:55.619823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.619988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.619999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.620006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.620022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.620038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.620055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.688 [2024-07-15 21:15:55.620071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24872 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24880 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24888 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 
[2024-07-15 21:15:55.620207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24904 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24912 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24920 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24936 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24944 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24952 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24968 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24976 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24984 len:8 PRP1 0x0 PRP2 0x0 00:25:39.688 [2024-07-15 21:15:55.620513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.688 [2024-07-15 21:15:55.620521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.688 [2024-07-15 21:15:55.620528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.688 [2024-07-15 21:15:55.620535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:24992 len:8 PRP1 0x0 PRP2 0x0
00:25:39.689 [2024-07-15 21:15:55.620542-620568] nvme_qpair.c: [condensed: repeated NOTICE/ERROR messages - the queued READ i/o at lba 24992 and 25000 (sqid:1) were aborted and completed manually with status ABORTED - SQ DELETION (00/08)]
00:25:39.689 [2024-07-15 21:15:55.620603] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1098b40 was disconnected and freed. reset controller.
00:25:39.689 [2024-07-15 21:15:55.620614] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:39.689 [2024-07-15 21:15:55.620622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:39.689 [2024-07-15 21:15:55.624134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:39.689 [2024-07-15 21:15:55.624160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106bfa0 (9): Bad file descriptor
00:25:39.689 [2024-07-15 21:15:55.660244] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:39.689-00:25:39.694 [2024-07-15 21:15:59.971352-983756] nvme_qpair.c: [condensed: several hundred repeated NOTICE/ERROR messages - READ commands (sqid:1, lba 34720-34800) and WRITE commands (sqid:1, lba 34808-35184) reported as ABORTED - SQ DELETION (00/08), then the remaining queued WRITE i/o (lba 35192-35736) aborted and completed manually with the same status while the qpair was torn down]
00:25:39.694 [2024-07-15 21:15:59.983799] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x109bcc0 was disconnected and freed. reset controller.
00:25:39.694 [2024-07-15 21:15:59.983809] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:39.694 [2024-07-15 21:15:59.983837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:39.694 [2024-07-15 21:15:59.983846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.694 [2024-07-15 21:15:59.983856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:39.694 [2024-07-15 21:15:59.983864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.694 [2024-07-15 21:15:59.983871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:39.694 [2024-07-15 21:15:59.983879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.694 [2024-07-15 21:15:59.983886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:39.694 [2024-07-15 21:15:59.983894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.694 [2024-07-15 21:15:59.983902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:39.694 [2024-07-15 21:15:59.983929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106bfa0 (9): Bad file descriptor
00:25:39.694 [2024-07-15 21:15:59.987471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:39.694 [2024-07-15 21:16:00.117977] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
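That "Resetting controller successful." notice is the third one in this run, and it is exactly what the harness checks for next: the failover.sh trace below greps the captured bdevperf output for that string and requires a count of three, one for each forced path switch. A minimal sketch of that gate in shell, with try.txt standing in for the capture file the script cats at host/failover.sh@94 (the error handling here is illustrative, not lifted from the script):

    # Count how many controller resets completed successfully during the failover run.
    count=$(grep -c 'Resetting controller successful' try.txt)
    # The test expects one successful reset per path switch, i.e. exactly three.
    if ((count != 3)); then
        echo "nvmf_failover: expected 3 successful resets, got $count" >&2
        exit 1
    fi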
00:25:39.694 00:25:39.694 Latency(us) 00:25:39.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.694 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:39.694 Verification LBA range: start 0x0 length 0x4000 00:25:39.694 NVMe0n1 : 15.01 11243.03 43.92 463.54 0.00 10905.89 778.24 18459.31 00:25:39.694 =================================================================================================================== 00:25:39.694 Total : 11243.03 43.92 463.54 0.00 10905.89 778.24 18459.31 00:25:39.694 Received shutdown signal, test time was about 15.000000 seconds 00:25:39.694 00:25:39.694 Latency(us) 00:25:39.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.694 =================================================================================================================== 00:25:39.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2088174 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2088174 /var/tmp/bdevperf.sock 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2088174 ']' 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.694 21:16:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:39.955 21:16:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.955 21:16:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:39.955 21:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:40.216 [2024-07-15 21:16:07.326737] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:40.216 21:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:40.216 [2024-07-15 21:16:07.491099] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:40.476 21:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.736 NVMe0n1 00:25:40.736 21:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.736 00:25:40.997 21:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.997 00:25:41.257 21:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.257 21:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:41.257 21:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:41.518 21:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:44.815 21:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.815 21:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:44.815 21:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2089324 00:25:44.815 21:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:44.815 21:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2089324 00:25:45.754 0 00:25:45.754 21:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:45.754 [2024-07-15 21:16:06.426663] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:25:45.754 [2024-07-15 21:16:06.426755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088174 ] 00:25:45.754 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.754 [2024-07-15 21:16:06.495854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.754 [2024-07-15 21:16:06.558665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.754 [2024-07-15 21:16:08.612369] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:45.754 [2024-07-15 21:16:08.612417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.754 [2024-07-15 21:16:08.612429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.754 [2024-07-15 21:16:08.612439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.754 [2024-07-15 21:16:08.612447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.754 [2024-07-15 21:16:08.612455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.754 [2024-07-15 21:16:08.612462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.754 [2024-07-15 21:16:08.612470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.754 [2024-07-15 21:16:08.612477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.754 [2024-07-15 21:16:08.612485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.754 [2024-07-15 21:16:08.612514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.754 [2024-07-15 21:16:08.612528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243dfa0 (9): Bad file descriptor 00:25:45.754 [2024-07-15 21:16:08.622097] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:45.754 Running I/O for 1 seconds... 
00:25:45.754 00:25:45.754 Latency(us) 00:25:45.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.754 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:45.754 Verification LBA range: start 0x0 length 0x4000 00:25:45.754 NVMe0n1 : 1.01 11142.73 43.53 0.00 0.00 11433.57 2512.21 10158.08 00:25:45.754 =================================================================================================================== 00:25:45.754 Total : 11142.73 43.53 0.00 0.00 11433.57 2512.21 10158.08 00:25:45.754 21:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:45.754 21:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:46.014 21:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:46.014 21:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:46.014 21:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:46.273 21:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:46.532 21:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2088174 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2088174 ']' 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2088174 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2088174 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2088174' 00:25:49.825 killing process with pid 2088174 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2088174 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2088174 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:49.825 21:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:50.085 
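The failover exercise logged above boils down to a short RPC sequence; the sketch below is a condensed, illustrative recap of commands already visible in this log (paths shortened, rpc.py standing in for scripts/rpc.py, try.txt for the test's output file under test/nvmf/host), not a verbatim excerpt of failover.sh.

# Condensed sketch of the failover flow above, assuming a bdevperf instance
# serving RPCs on /var/tmp/bdevperf.sock and a target subsystem
# nqn.2016-06.io.spdk:cnode1 already listening on 10.0.0.2:4420.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Removing the currently active path forces bdev_nvme to fail over to the next
# trid, which shows up in the output as "Resetting controller successful".
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
# The test collects bdevperf output and counts the failovers it expects
# (3 in the first pass of this run):
grep -c 'Resetting controller successful' try.txt

Detaching each remaining trid in turn (4422 and then 4421 above) repeats the same pattern, which is why the grep count tracks the number of paths removed.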
21:16:17 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.085 rmmod nvme_tcp 00:25:50.085 rmmod nvme_fabrics 00:25:50.085 rmmod nvme_keyring 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:50.085 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2084597 ']' 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2084597 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2084597 ']' 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2084597 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2084597 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2084597' 00:25:50.086 killing process with pid 2084597 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2084597 00:25:50.086 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2084597 00:25:50.345 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.345 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.345 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.345 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.345 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.345 21:16:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.345 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.345 21:16:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.254 21:16:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.254 00:25:52.254 real 0m40.103s 00:25:52.254 user 2m1.261s 00:25:52.254 sys 0m8.685s 00:25:52.254 21:16:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:52.254 21:16:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:25:52.254 ************************************ 00:25:52.254 END TEST nvmf_failover 00:25:52.254 ************************************ 00:25:52.254 21:16:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:52.254 21:16:19 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:52.254 21:16:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:52.254 21:16:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.254 21:16:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:52.254 ************************************ 00:25:52.254 START TEST nvmf_host_discovery 00:25:52.254 ************************************ 00:25:52.254 21:16:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:52.515 * Looking for test storage... 00:25:52.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:52.515 21:16:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:52.515 21:16:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.650 21:16:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:00.650 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:00.650 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:00.650 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:00.651 21:16:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:00.651 Found net devices under 0000:31:00.0: cvl_0_0 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:00.651 Found net devices under 0000:31:00.1: cvl_0_1 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.651 21:16:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:00.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:26:00.651 00:26:00.651 --- 10.0.0.2 ping statistics --- 00:26:00.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.651 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:00.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:26:00.651 00:26:00.651 --- 10.0.0.1 ping statistics --- 00:26:00.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.651 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2094908 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2094908 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2094908 ']' 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.651 21:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.651 [2024-07-15 21:16:27.563728] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:26:00.651 [2024-07-15 21:16:27.563794] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.651 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.651 [2024-07-15 21:16:27.659774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.651 [2024-07-15 21:16:27.758075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.651 [2024-07-15 21:16:27.758135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.651 [2024-07-15 21:16:27.758144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.651 [2024-07-15 21:16:27.758151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.651 [2024-07-15 21:16:27.758157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
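For reference, the network plumbing that nvmftestinit performed just above (and that the nvmf_tgt launch inside the namespace depends on) reduces to the following; this is a hedged recap of the ip/iptables commands already shown in the log, with $SPDK used as a stand-in for the long Jenkins workspace path to the spdk checkout.

# Recap of the namespace wiring shown above: the target-side e810 port
# (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, while the
# initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, so the
# host and target talk over the physical links.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# The target application is then started inside the namespace:
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2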
00:26:00.651 [2024-07-15 21:16:27.758194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.221 [2024-07-15 21:16:28.394241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.221 [2024-07-15 21:16:28.406500] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.221 null0 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.221 null1 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2095167 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2095167 /tmp/host.sock 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2095167 ']' 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:01.221 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.221 21:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.221 [2024-07-15 21:16:28.502575] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:26:01.221 [2024-07-15 21:16:28.502642] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095167 ] 00:26:01.481 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.481 [2024-07-15 21:16:28.573495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.481 [2024-07-15 21:16:28.648034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.052 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.052 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.053 21:16:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.053 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.314 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.576 [2024-07-15 21:16:29.641593] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.576 21:16:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:02.576 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:02.577 21:16:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:03.148 [2024-07-15 21:16:30.312050] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:03.148 [2024-07-15 21:16:30.312074] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:03.148 [2024-07-15 21:16:30.312088] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.148 [2024-07-15 21:16:30.399360] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:03.408 [2024-07-15 21:16:30.586577] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:26:03.408 [2024-07-15 21:16:30.586600] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.668 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.929 21:16:30 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:03.929 21:16:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:03.929 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.930 [2024-07-15 21:16:31.181536] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:03.930 [2024-07-15 21:16:31.181877] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:03.930 [2024-07-15 21:16:31.181902] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:03.930 21:16:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:04.191 [2024-07-15 21:16:31.268589] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:04.191 21:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:04.191 [2024-07-15 21:16:31.368268] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:04.191 [2024-07-15 21:16:31.368286] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:04.191 [2024-07-15 21:16:31.368292] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:05.130 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:05.131 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.392 [2024-07-15 21:16:32.449611] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:05.392 [2024-07-15 21:16:32.449632] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:05.392 [2024-07-15 21:16:32.458068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.392 [2024-07-15 21:16:32.458087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.392 [2024-07-15 21:16:32.458096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.392 [2024-07-15 21:16:32.458103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.392 [2024-07-15 21:16:32.458110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.392 [2024-07-15 21:16:32.458118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.392 [2024-07-15 21:16:32.458125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.392 [2024-07-15 21:16:32.458132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.392 [2024-07-15 21:16:32.458139] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:05.392 [2024-07-15 21:16:32.468083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.392 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.392 [2024-07-15 21:16:32.478121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.392 [2024-07-15 21:16:32.478329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.392 [2024-07-15 21:16:32.478344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.392 [2024-07-15 21:16:32.478352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.392 [2024-07-15 21:16:32.478363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.392 [2024-07-15 21:16:32.478374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.392 [2024-07-15 21:16:32.478380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.392 [2024-07-15 21:16:32.478388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.392 [2024-07-15 21:16:32.478400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.392 [2024-07-15 21:16:32.488178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.392 [2024-07-15 21:16:32.488519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.392 [2024-07-15 21:16:32.488531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.392 [2024-07-15 21:16:32.488538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.392 [2024-07-15 21:16:32.488553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.392 [2024-07-15 21:16:32.488563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.488569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.488576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
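# Editor's note: the polling pattern repeated throughout this trace (common/autotest_common.sh
# @912-@918: local cond / local max=10 / (( max-- )) / eval "$cond" / sleep 1) can be
# reconstructed approximately as the helper below. This is a sketch inferred from the xtrace,
# not the verbatim SPDK source; the exact behavior when the retry budget runs out is assumed.
waitforcondition() {
    local cond=$1   # a bash expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    local max=10    # retry budget seen in the trace
    while ((max--)); do
        eval "$cond" && return 0   # condition met -> stop waiting
        sleep 1                    # poll again after 1s, as the "sleep 1" lines above show
    done
    return 1                       # condition never became true (assumed failure handling)
}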
00:26:05.393 [2024-07-15 21:16:32.488586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.393 [2024-07-15 21:16:32.498234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.393 [2024-07-15 21:16:32.498627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.393 [2024-07-15 21:16:32.498639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.393 [2024-07-15 21:16:32.498646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.393 [2024-07-15 21:16:32.498656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.393 [2024-07-15 21:16:32.498666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.498672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.498679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.393 [2024-07-15 21:16:32.498690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.393 [2024-07-15 21:16:32.508287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:05.393 [2024-07-15 21:16:32.508632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.393 [2024-07-15 21:16:32.508645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.393 [2024-07-15 21:16:32.508652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.393 [2024-07-15 21:16:32.508663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.393 [2024-07-15 21:16:32.508673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.508680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.508686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.393 [2024-07-15 21:16:32.508697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
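# Editor's note: the "$(get_subsystem_names)" conditions above poll the host-side RPC socket.
# From the trace (host/discovery.sh@59) the helper is, to a close approximation, the one-liner
# below; rpc_cmd is assumed to wrap SPDK's scripts/rpc.py against the given UNIX socket.
get_subsystem_names() {
    # list the controllers attached by the host app on /tmp/host.sock, print their names,
    # and normalize to a single sorted, space-separated line ("nvme0" while attached)
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}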
00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:05.393 [2024-07-15 21:16:32.518339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.393 [2024-07-15 21:16:32.518782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.393 [2024-07-15 21:16:32.518793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.393 [2024-07-15 21:16:32.518800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.393 [2024-07-15 21:16:32.518811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.393 [2024-07-15 21:16:32.518821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.518827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.518834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.393 [2024-07-15 21:16:32.518851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
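# Editor's note: likewise, the "$(get_bdev_list)" checks (host/discovery.sh@55 in the trace)
# reduce to the sketch below; after the second namespace (null1) is added earlier in this run,
# it yields "nvme0n1 nvme0n2", which is exactly what the comparisons above expect.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}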
00:26:05.393 [2024-07-15 21:16:32.528390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.393 [2024-07-15 21:16:32.528644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.393 [2024-07-15 21:16:32.528656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.393 [2024-07-15 21:16:32.528664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.393 [2024-07-15 21:16:32.528676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.393 [2024-07-15 21:16:32.528687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.528694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.528702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.393 [2024-07-15 21:16:32.528712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.393 [2024-07-15 21:16:32.538444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.393 [2024-07-15 21:16:32.538783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.393 [2024-07-15 21:16:32.538795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.393 [2024-07-15 21:16:32.538802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.393 [2024-07-15 21:16:32.538813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.393 [2024-07-15 21:16:32.538822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.538828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.538835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.393 [2024-07-15 21:16:32.538845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
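# Editor's note: the "$(get_subsystem_paths nvme0)" conditions compare the set of listener
# ports the controller is currently reachable through. From host/discovery.sh@63 in the trace,
# a close reconstruction is:
get_subsystem_paths() {
    local name=$1   # controller name, "nvme0" in this run
    # print the trsvcid (TCP port) of every path of that controller, numerically sorted,
    # e.g. "4420 4421" while both listeners exist, then "4421" after 4420 is removed below
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}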
00:26:05.393 [2024-07-15 21:16:32.548497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.393 [2024-07-15 21:16:32.548880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.393 [2024-07-15 21:16:32.548891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.393 [2024-07-15 21:16:32.548898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.393 [2024-07-15 21:16:32.548909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.393 [2024-07-15 21:16:32.548918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.548924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.548931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.393 [2024-07-15 21:16:32.548941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.393 [2024-07-15 21:16:32.558549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.393 [2024-07-15 21:16:32.558887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.393 [2024-07-15 21:16:32.558898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.393 [2024-07-15 21:16:32.558905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.393 [2024-07-15 21:16:32.558916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.393 [2024-07-15 21:16:32.558925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.558931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.558938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.393 [2024-07-15 21:16:32.558948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
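# Editor's note: the notification bookkeeping (host/discovery.sh@74-@75 and @79-@80) counts
# bdev add/remove events via notify_get_notifications. The values in the trace
# (notify_id 0 -> 1 -> 2 -> 4, starting from notify_id=0 earlier in the run) are consistent
# with the sketch below; treat it as an approximation of the test helpers, not their source.
get_notification_count() {
    # count events newer than the last seen notify_id, then advance the cursor
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}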
00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:05.393 [2024-07-15 21:16:32.568601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.393 [2024-07-15 21:16:32.568945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.393 [2024-07-15 21:16:32.568957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x652aa0 with addr=10.0.0.2, port=4420 00:26:05.393 [2024-07-15 21:16:32.568966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652aa0 is same with the state(5) to be set 00:26:05.393 [2024-07-15 21:16:32.568977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652aa0 (9): Bad file descriptor 00:26:05.393 [2024-07-15 21:16:32.568987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.393 [2024-07-15 21:16:32.568994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.393 [2024-07-15 21:16:32.569002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.393 [2024-07-15 21:16:32.569014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
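# Editor's note: the connect() errno=111 / "Resetting controller failed" burst above is the
# expected fallout of host/discovery.sh@127, which drops the first listener while the host
# still holds a path through it. The step reduces to roughly the following two commands
# (addresses, ports and NQNs taken from this trace; NVMF_SECOND_PORT is 4421 here):
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The host keeps retrying 10.0.0.2:4420 until the next discovery log page prunes the removed
# path; the test then waits for the path list to shrink to the surviving port only:
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'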
00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.393 21:16:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:05.394 [2024-07-15 21:16:32.577342] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:05.394 [2024-07-15 21:16:32.577362] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:05.394 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.394 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:05.394 21:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.332 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:06.592 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:06.593 21:16:33 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.593 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.852 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:06.852 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:06.852 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:06.852 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:06.852 21:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:06.852 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.852 21:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.790 [2024-07-15 21:16:34.934391] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:07.790 [2024-07-15 21:16:34.934410] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:07.790 [2024-07-15 21:16:34.934423] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:07.790 [2024-07-15 21:16:35.021703] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:08.051 [2024-07-15 21:16:35.291206] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:08.051 [2024-07-15 21:16:35.291243] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:08.051 request: 00:26:08.051 { 00:26:08.051 "name": "nvme", 00:26:08.051 "trtype": "tcp", 00:26:08.051 "traddr": "10.0.0.2", 00:26:08.051 "adrfam": "ipv4", 00:26:08.051 "trsvcid": "8009", 00:26:08.051 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:08.051 "wait_for_attach": true, 00:26:08.051 "method": "bdev_nvme_start_discovery", 00:26:08.051 "req_id": 1 00:26:08.051 } 00:26:08.051 Got JSON-RPC error response 00:26:08.051 response: 00:26:08.051 { 00:26:08.051 "code": -17, 00:26:08.051 "message": "File exists" 00:26:08.051 } 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:08.051 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.312 request: 00:26:08.312 { 00:26:08.312 "name": "nvme_second", 00:26:08.312 "trtype": "tcp", 00:26:08.312 "traddr": "10.0.0.2", 00:26:08.312 "adrfam": "ipv4", 00:26:08.312 "trsvcid": "8009", 00:26:08.312 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:08.312 "wait_for_attach": true, 00:26:08.312 "method": "bdev_nvme_start_discovery", 00:26:08.312 "req_id": 1 00:26:08.312 } 00:26:08.312 Got JSON-RPC error response 00:26:08.312 response: 00:26:08.312 { 00:26:08.312 "code": -17, 00:26:08.312 "message": "File exists" 00:26:08.312 } 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.312 21:16:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.312 21:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.697 [2024-07-15 21:16:36.562744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.697 [2024-07-15 21:16:36.562773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8304c0 with addr=10.0.0.2, port=8010 00:26:09.697 [2024-07-15 21:16:36.562786] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:09.697 [2024-07-15 21:16:36.562793] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:09.697 [2024-07-15 21:16:36.562801] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:10.638 [2024-07-15 21:16:37.565124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.638 [2024-07-15 21:16:37.565147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8304c0 with addr=10.0.0.2, port=8010 00:26:10.638 [2024-07-15 21:16:37.565162] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:10.638 [2024-07-15 21:16:37.565169] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:10.638 [2024-07-15 21:16:37.565175] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:11.580 [2024-07-15 21:16:38.567075] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:11.580 request: 00:26:11.580 { 00:26:11.580 "name": "nvme_second", 00:26:11.580 "trtype": "tcp", 00:26:11.580 "traddr": "10.0.0.2", 00:26:11.580 "adrfam": "ipv4", 00:26:11.580 "trsvcid": "8010", 00:26:11.580 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:11.580 "wait_for_attach": false, 00:26:11.580 "attach_timeout_ms": 3000, 00:26:11.580 "method": "bdev_nvme_start_discovery", 00:26:11.580 "req_id": 1 00:26:11.580 } 00:26:11.580 Got JSON-RPC error response 00:26:11.580 response: 00:26:11.580 { 00:26:11.580 "code": -110, 
00:26:11.580 "message": "Connection timed out" 00:26:11.580 } 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2095167 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:11.580 rmmod nvme_tcp 00:26:11.580 rmmod nvme_fabrics 00:26:11.580 rmmod nvme_keyring 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2094908 ']' 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2094908 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2094908 ']' 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2094908 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2094908 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2094908' 00:26:11.580 killing process with pid 2094908 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2094908 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2094908 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.580 21:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.125 21:16:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:14.125 00:26:14.125 real 0m21.396s 00:26:14.125 user 0m25.313s 00:26:14.125 sys 0m7.384s 00:26:14.125 21:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.125 21:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.125 ************************************ 00:26:14.125 END TEST nvmf_host_discovery 00:26:14.125 ************************************ 00:26:14.125 21:16:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:14.125 21:16:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:14.125 21:16:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:14.125 21:16:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.125 21:16:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.125 ************************************ 00:26:14.125 START TEST nvmf_host_multipath_status 00:26:14.125 ************************************ 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:14.125 * Looking for test storage... 
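Between the two test suites the harness tears the previous setup down, as logged just above: it kills the host-side app and the nvmf_tgt, unloads the kernel NVMe-oF initiator modules, removes the SPDK network namespace and flushes the initiator address. Roughly, with the run-specific PIDs as placeholders (the namespace-removal command is an assumption about what _remove_spdk_ns does, not taken from the log):

  kill <host_app_pid>              # 2095167 in this run
  kill <nvmf_tgt_pid>              # 2094908 in this run
  modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1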
00:26:14.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:14.125 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:14.125 21:16:41 
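The sourcing above mostly re-exports PATH and pulls in the shared nvmf defaults; the pieces that matter for the rest of this test are the listener ports, the generated host NQN, and the two RPC endpoints (target side via rpc.py, host side via /var/tmp/bdevperf.sock). Condensed from the lines above, with the long Jenkins paths elided:

  NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)           # nqn.2014-08.org.nvmexpress:uuid:00539ede-... in this run
  MALLOC_BDEV_SIZE=64                        # size later passed to bdev_malloc_create
  MALLOC_BLOCK_SIZE=512
  rpc_py=<spdk>/scripts/rpc.py               # talks to the nvmf_tgt
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # talks to the host-side bdevperf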
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.126 21:16:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:22.401 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:22.401 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
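Device selection above is driven by the script's e810 device-ID list (0x1592, 0x159b): in this run it finds the two ports of an Intel 0x8086:0x159b adapter at 0000:31:00.0/.1, both already bound to the ice driver. A quick way to reproduce the same match outside the harness (hypothetical check, not part of the script):

  lspci -d 8086:159b                         # lists the two E810-family ports found above
  ls /sys/bus/pci/devices/0000:31:00.0/net   # shows the netdev name (cvl_0_0 in the next lines)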
00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:22.401 Found net devices under 0000:31:00.0: cvl_0_0 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:22.401 Found net devices under 0000:31:00.1: cvl_0_1 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.401 21:16:48 
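With cvl_0_0 and cvl_0_1 discovered, nvmf_tcp_init picks cvl_0_0 as the target interface (10.0.0.2) and cvl_0_1 as the initiator interface (10.0.0.1); the commands that follow move the target port into the cvl_0_0_ns_spdk namespace, assign both addresses, open TCP/4420 in iptables and verify reachability with the two pings below. A couple of assumed sanity checks (not part of the script) for the resulting topology:

  ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0   # expect 10.0.0.2/24 (target side)
  ip -4 addr show cvl_0_1                                 # expect 10.0.0.1/24 (initiator side)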
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:22.401 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:22.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:26:22.402 00:26:22.402 --- 10.0.0.2 ping statistics --- 00:26:22.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.402 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:22.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:26:22.402 00:26:22.402 --- 10.0.0.1 ping statistics --- 00:26:22.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.402 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2101838 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2101838 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2101838 ']' 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.402 21:16:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.402 [2024-07-15 21:16:48.935047] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:26:22.402 [2024-07-15 21:16:48.935114] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.402 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.402 [2024-07-15 21:16:49.014424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:22.402 [2024-07-15 21:16:49.088553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.402 [2024-07-15 21:16:49.088592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.402 [2024-07-15 21:16:49.088600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.402 [2024-07-15 21:16:49.088606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.402 [2024-07-15 21:16:49.088612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.402 [2024-07-15 21:16:49.088750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.402 [2024-07-15 21:16:49.088752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2101838 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:22.663 [2024-07-15 21:16:49.880314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.663 21:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:22.923 Malloc0 00:26:22.923 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:23.184 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.184 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.445 [2024-07-15 21:16:50.506160] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.445 21:16:50 
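Once the nvmf_tgt reactors are up, the target is configured entirely through rpc.py: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem with ANA reporting enabled, the namespace, and the listeners. Condensed from the calls above (the 4421 listener is added immediately after; these calls use the target's default RPC socket, so no -s is needed):

  rpc=<spdk>/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # -a: allow any host, -r: enable ANA reporting (needed for the ANA-state checks that follow),
  # -m 2: max namespaces -- flag meanings as understood, worth confirming against rpc.py --help
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421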
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:23.445 [2024-07-15 21:16:50.662512] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2102208 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2102208 /var/tmp/bdevperf.sock 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2102208 ']' 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:23.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:23.445 21:16:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:24.391 21:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:24.391 21:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:24.391 21:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:24.391 21:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:24.962 Nvme0n1 00:26:24.962 21:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:25.222 Nvme0n1 00:26:25.222 21:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:25.222 21:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:27.133 21:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:27.133 21:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:27.393 21:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:27.653 21:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:28.590 21:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:28.590 21:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:28.590 21:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.590 21:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.849 21:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.849 21:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:28.849 21:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.849 21:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.850 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.850 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.850 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.850 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.109 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.109 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.109 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.109 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.369 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.369 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:29.369 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.369 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.369 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.369 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:29.369 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.369 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.628 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.628 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:29.628 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:29.887 21:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:29.887 21:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.266 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.526 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.526 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.526 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.526 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.526 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.526 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:31.784 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.784 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.784 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.784 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:31.784 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.784 21:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.044 21:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.044 21:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:32.044 21:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:32.044 21:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:32.303 21:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:33.242 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:33.242 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:33.243 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.243 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.503 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.503 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:33.503 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.503 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.764 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.764 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.764 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.764 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:33.764 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.764 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.764 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.764 21:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.025 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.025 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.025 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.025 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.025 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.025 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.025 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.025 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:34.286 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.286 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:34.286 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:34.546 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:34.546 21:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:35.931 21:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:35.931 21:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:35.931 21:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.931 21:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.931 21:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.931 21:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:35.931 21:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.931 21:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:35.931 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:35.931 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:35.931 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.931 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:36.191 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.191 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:36.192 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.192 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:36.452 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.452 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:36.452 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.452 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.452 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:36.452 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:36.452 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.452 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.712 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:36.712 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:36.712 21:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:36.974 21:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:36.974 21:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:37.917 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:37.917 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:37.917 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.917 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:38.178 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.178 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:38.178 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.178 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.440 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.440 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.440 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.440 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.440 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.440 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:38.440 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.440 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.701 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.701 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:38.701 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.701 21:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.963 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.963 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:38.963 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.963 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:38.963 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.963 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:38.963 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:39.224 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:39.485 21:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:40.429 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:40.429 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:40.429 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.429 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.429 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.429 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:40.429 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.429 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.690 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.690 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.690 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.690 21:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.951 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.951 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.951 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.951 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.951 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.951 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:40.951 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.951 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:41.213 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:41.213 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:41.213 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.213 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.474 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.474 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:41.474 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:41.474 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:41.735 21:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:41.996 21:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:42.938 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:42.938 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:42.938 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.938 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.938 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.938 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:42.939 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.939 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:43.200 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.200 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:43.200 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.200 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:43.460 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.460 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:43.460 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.460 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.460 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.460 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.460 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.460 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.721 21:17:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.721 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:43.721 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.721 21:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.982 21:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.982 21:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:43.982 21:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:43.982 21:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:44.242 21:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:45.182 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:45.182 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:45.182 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.182 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:45.443 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:45.443 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:45.443 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.443 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:45.703 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.703 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:45.703 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.703 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:45.703 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.703 21:17:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:45.703 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.703 21:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:45.963 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.963 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.963 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.963 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:46.223 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.223 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:46.223 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.223 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:46.223 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.223 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:46.223 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:46.482 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:46.482 21:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:47.861 21:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:47.861 21:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:47.861 21:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.862 21:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.862 21:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.862 21:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:47.862 21:17:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.862 21:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:47.862 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.862 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:47.862 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.862 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.123 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.123 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.123 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.123 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.384 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.384 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.384 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.384 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.385 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.385 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.385 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.385 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.645 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.645 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:48.645 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:48.905 21:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:48.905 21:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:49.847 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:49.847 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:49.847 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.847 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.107 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.107 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:50.107 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.107 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.367 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.367 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.367 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.367 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.367 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.367 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.367 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.367 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.628 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.628 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:50.628 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.628 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:50.888 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.888 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:50.888 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.888 21:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:50.888 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.888 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2102208 00:26:50.888 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2102208 ']' 00:26:50.888 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2102208 00:26:50.888 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:50.888 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:50.888 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2102208 00:26:51.212 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:51.212 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:51.212 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2102208' 00:26:51.212 killing process with pid 2102208 00:26:51.212 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2102208 00:26:51.212 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2102208 00:26:51.212 Connection closed with partial response: 00:26:51.212 00:26:51.212 00:26:51.212 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2102208 00:26:51.212 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:51.212 [2024-07-15 21:16:50.736944] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:26:51.212 [2024-07-15 21:16:50.737003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102208 ] 00:26:51.212 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.212 [2024-07-15 21:16:50.793714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.212 [2024-07-15 21:16:50.845668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.213 Running I/O for 90 seconds... 
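For readers following the trace: the set_ANA_state / check_status / port_status helpers exercised above can be reconstructed from the rpc.py command lines and jq filters that appear in the log. The sketch below is such a reconstruction, not the verbatim contents of host/multipath_status.sh; only the RPC and jq invocations are taken from the trace, while the variable names (rootdir, bdevperf_rpc_sock, nqn, target_ip) and the function structure are assumptions.

#!/usr/bin/env bash
# Hedged reconstruction of the helpers seen in the trace above.
# Only the rpc.py command lines and jq filters come from the log;
# variable names and function bodies are assumptions.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed name
bdevperf_rpc_sock=/var/tmp/bdevperf.sock                    # assumed name
nqn=nqn.2016-06.io.spdk:cnode1
target_ip=10.0.0.2                                          # assumed name

# port_status <trsvcid> <field> <expected>
# e.g. "port_status 4420 current true" checks that the 4420 path is the
# currently selected I/O path.
port_status() {
    [[ $($rootdir/scripts/rpc.py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

# check_status <4420.current> <4421.current> <4420.connected> <4421.connected>
#              <4420.accessible> <4421.accessible>
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

# set_ANA_state <state for listener 4420> <state for listener 4421>
set_ANA_state() {
    $rootdir/scripts/rpc.py nvmf_subsystem_listener_set_ana_state $nqn \
        -t tcp -a $target_ip -s 4420 -n "$1"
    $rootdir/scripts/rpc.py nvmf_subsystem_listener_set_ana_state $nqn \
        -t tcp -a $target_ip -s 4421 -n "$2"
}

With this layout, a call such as check_status true false true true true true from the trace reads as: expected current/connected/accessible values given pairwise for port 4420 and then 4421, verified one field per RPC call.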
00:26:51.213 [2024-07-15 21:17:03.999056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-07-15 21:17:03.999090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:03.999350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:03.999355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.000989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.000999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.213 [2024-07-15 21:17:04.001005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.001015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.001020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.001030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.001035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.001045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-07-15 21:17:04.001050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.213 [2024-07-15 21:17:04.001061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
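The command/completion pairs in this part of the dumped bdevperf log show in-flight WRITEs completing with ASYMMETRIC ACCESS INACCESSIBLE; the (03/02) in each completion is NVMe status code type 0x3 (path-related) with status code 0x02 (asymmetric access inaccessible), which is what the initiator sees on a path whose listener was just switched to the inaccessible ANA state, so the bdev layer can retry the I/O on another available path. A quick way to gauge how many I/Os hit that status in this run is to count the completions in the dumped file; the grep below is an illustrative post-mortem check, not something multipath_status.sh itself runs.

# Illustrative post-mortem check (not part of the test): count completions in
# the dumped bdevperf log that carried the path-related "asymmetric access
# inaccessible" status (SCT 0x3 / SC 0x02).
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt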
00:26:51.214 [2024-07-15 21:17:04.001595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-07-15 21:17:04.001706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.214 [2024-07-15 21:17:04.001717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.001915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.001920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-07-15 21:17:04.002365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-07-15 21:17:04.002380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:51.215 [2024-07-15 21:17:04.002395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-07-15 21:17:04.002410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-07-15 21:17:04.002425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-07-15 21:17:04.002441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-07-15 21:17:04.002456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-07-15 21:17:04.002563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:51.215 [2024-07-15 21:17:04.002573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.002992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.002997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:51.216 
[2024-07-15 21:17:04.003068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.216 [2024-07-15 21:17:04.003227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.216 [2024-07-15 21:17:04.003334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-07-15 21:17:04.003339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 
21:17:04.003783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.003985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.003995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51752 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.004002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.004012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.004017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.004028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.004033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.004135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.004142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.004152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.004158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.004168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.004173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:51.217 [2024-07-15 21:17:04.004183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.217 [2024-07-15 21:17:04.004188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004859] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.004951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.004956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.005110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.005117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.005128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.005133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.005143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.005148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 
m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.005158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.005163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.005173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.005179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.005189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.005194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.005204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.005209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.016046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.016068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.218 [2024-07-15 21:17:04.016202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.218 [2024-07-15 21:17:04.016212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.219 [2024-07-15 21:17:04.016546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.219 [2024-07-15 21:17:04.016561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.219 [2024-07-15 21:17:04.016577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:51.219 [2024-07-15 21:17:04.016587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:51.219 [2024-07-15 21:17:04.016592] .. 00:26:51.226 [2024-07-15 21:17:04.028516] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [several hundred paired command/completion entries omitted] queued READ and WRITE commands on qid:1 (nsid:1, lba:51464-52480, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK) each completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:26:51.226 [2024-07-15 21:17:04.028526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028819] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.226 [2024-07-15 21:17:04.028934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.226 [2024-07-15 21:17:04.028940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.028950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.028956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.028967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 
21:17:04.028972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.028982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.028987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.028997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52144 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.029990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.029994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.030009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.227 [2024-07-15 21:17:04.030024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.227 [2024-07-15 21:17:04.030040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.227 [2024-07-15 21:17:04.030055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.227 [2024-07-15 21:17:04.030070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.227 [2024-07-15 21:17:04.030085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.227 [2024-07-15 21:17:04.030101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.227 [2024-07-15 21:17:04.030117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.030132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.030149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.030164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.030179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.030194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:51.227 [2024-07-15 21:17:04.030204] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.227 [2024-07-15 21:17:04.030209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 
m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.030964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.030971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.031113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.031129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.031144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.031159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.031174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.031190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.031205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.228 [2024-07-15 21:17:04.031220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:51.228 [2024-07-15 21:17:04.031233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.229 [2024-07-15 21:17:04.031284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.229 [2024-07-15 21:17:04.031420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.031873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.031878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.032007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.032013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:51.229 [2024-07-15 21:17:04.032024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.229 [2024-07-15 21:17:04.032029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:26:51.230 [2024-07-15 21:17:04.032268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.032977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.032981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.033055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.033061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.033072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.033077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.033088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.033094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.033104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.230 [2024-07-15 21:17:04.033109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.230 [2024-07-15 21:17:04.033119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.033124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.033133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.033138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.033148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.033153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 
[2024-07-15 21:17:04.038600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52144 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.231 [2024-07-15 21:17:04.038878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.231 [2024-07-15 21:17:04.038893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038904] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.231 [2024-07-15 21:17:04.038909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.231 [2024-07-15 21:17:04.038924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.231 [2024-07-15 21:17:04.038939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.231 [2024-07-15 21:17:04.038954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.231 [2024-07-15 21:17:04.038969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.038984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.038995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.039000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.039010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.039015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.039025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.039030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.039040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.039045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 
21:17:04.039055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.039061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:51.231 [2024-07-15 21:17:04.039073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.231 [2024-07-15 21:17:04.039078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 
sqhd:0003 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039516] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.232 [2024-07-15 21:17:04.039531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.232 [2024-07-15 21:17:04.039575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.232 [2024-07-15 21:17:04.039585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.233 [2024-07-15 21:17:04.039990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.233 [2024-07-15 21:17:04.039995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:26:51.234 [2024-07-15 21:17:04.040110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.040399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.040403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.041279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.041296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.041312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.041327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.041342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.041356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.041371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.234 [2024-07-15 21:17:04.041386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.234 [2024-07-15 21:17:04.041917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.041924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.041935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.235 [2024-07-15 21:17:04.041940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.041952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.041957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.041968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.041972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.041982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.041987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.041997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.235 [2024-07-15 21:17:04.042357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.235 [2024-07-15 21:17:04.042372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.235 [2024-07-15 21:17:04.042388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.235 [2024-07-15 21:17:04.042403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.235 [2024-07-15 21:17:04.042418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.235 [2024-07-15 21:17:04.042433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.235 [2024-07-15 21:17:04.042448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:26:51.235 [2024-07-15 21:17:04.042536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.235 [2024-07-15 21:17:04.042574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:51.235 [2024-07-15 21:17:04.042584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.042987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.042993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.236 [2024-07-15 21:17:04.043279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.236 [2024-07-15 21:17:04.043294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.236 [2024-07-15 21:17:04.043473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.236 [2024-07-15 21:17:04.043483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.043985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.043996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:26:51.237 [2024-07-15 21:17:04.044345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.237 [2024-07-15 21:17:04.044746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.237 [2024-07-15 21:17:04.044756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.044761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.044771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.044776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.238 [2024-07-15 21:17:04.045574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52064 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.238 [2024-07-15 21:17:04.045974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.238 [2024-07-15 21:17:04.045979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.045990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.045994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.239 [2024-07-15 21:17:04.046739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.239 [2024-07-15 21:17:04.046754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:26:51.239 [2024-07-15 21:17:04.046766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.239 [2024-07-15 21:17:04.046771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.239 [2024-07-15 21:17:04.046787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.239 [2024-07-15 21:17:04.046802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.239 [2024-07-15 21:17:04.046817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.239 [2024-07-15 21:17:04.046832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.046887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.046892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.047593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.047600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.047611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.047616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.047626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.239 [2024-07-15 21:17:04.047631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:51.239 [2024-07-15 21:17:04.047643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.047986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.047993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.240 [2024-07-15 21:17:04.048039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.240 [2024-07-15 21:17:04.048209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.240 [2024-07-15 21:17:04.048417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.240 [2024-07-15 21:17:04.048421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:26:51.241 [2024-07-15 21:17:04.048770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.048863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.048868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.241 [2024-07-15 21:17:04.049367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.241 [2024-07-15 21:17:04.049580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.242 [2024-07-15 21:17:04.049680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.049988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.049993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.242 [2024-07-15 21:17:04.050825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.242 [2024-07-15 21:17:04.050830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.050840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.050845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:26:51.243 [2024-07-15 21:17:04.050855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.050859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.050870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.050874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.050885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.050889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.050900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.050904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.243 [2024-07-15 21:17:04.051347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.243 [2024-07-15 21:17:04.051362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.243 [2024-07-15 21:17:04.051377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.243 [2024-07-15 21:17:04.051392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.243 [2024-07-15 21:17:04.051407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.243 [2024-07-15 21:17:04.051422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.243 [2024-07-15 21:17:04.051437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.243 [2024-07-15 21:17:04.051948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.243 [2024-07-15 21:17:04.051978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:51.243 [2024-07-15 21:17:04.051988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.051993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.244 [2024-07-15 21:17:04.052581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.244 [2024-07-15 21:17:04.052901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.244 [2024-07-15 21:17:04.052906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.052916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.052921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:26:51.245 [2024-07-15 21:17:04.052932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.052937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.053753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.053758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.245 [2024-07-15 21:17:04.054109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.245 [2024-07-15 21:17:04.054453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.245 [2024-07-15 21:17:04.054463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.054992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.054997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:26:51.246 [2024-07-15 21:17:04.055272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.246 [2024-07-15 21:17:04.055543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.246 [2024-07-15 21:17:04.055959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.055966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.055977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.055982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.055992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.055997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.056013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.247 [2024-07-15 21:17:04.056028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.247 [2024-07-15 21:17:04.056043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.247 [2024-07-15 21:17:04.056059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.247 [2024-07-15 21:17:04.056075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.247 [2024-07-15 21:17:04.056090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.247 [2024-07-15 21:17:04.056105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.247 [2024-07-15 21:17:04.056121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.056136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.056150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.056165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.056176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.056180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.247 [2024-07-15 21:17:04.057105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.247 [2024-07-15 21:17:04.057572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:51.247 [2024-07-15 21:17:04.057582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:26:51.248 [2024-07-15 21:17:04.057689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.248 [2024-07-15 21:17:04.057709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.057990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.057995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.248 [2024-07-15 21:17:04.058431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.248 [2024-07-15 21:17:04.058442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:51.249 [2024-07-15 21:17:04.058447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.058986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.058996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:26:51.249 [2024-07-15 21:17:04.059607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:51.249 [2024-07-15 21:17:04.059994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.249 [2024-07-15 21:17:04.059999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.250 [2024-07-15 21:17:04.060739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:51.250 [2024-07-15 21:17:04.060757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.250 [2024-07-15 21:17:04.060775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.250 [2024-07-15 21:17:04.060793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.250 [2024-07-15 21:17:04.060811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.250 [2024-07-15 21:17:04.060829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.250 [2024-07-15 21:17:04.060847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.060984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.060998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.061002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.061016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.061021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.061035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.061040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.061053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.061058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.061072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.061077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.061091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.061096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.061223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.061233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.250 [2024-07-15 21:17:04.061248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.250 [2024-07-15 21:17:04.061253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:26:51.251 [2024-07-15 21:17:04.061636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.061986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.061991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.251 [2024-07-15 21:17:04.062031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
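The paired command/completion notices above and below repeat for every I/O that was outstanding while a path reported ASYMMETRIC ACCESS INACCESSIBLE (printed as 03/02), i.e. the ANA-inaccessible condition that the multipath-status test is exercising here; the flood itself is not a failure indicator. When triaging a run like this it is usually enough to count the affected completions rather than read each entry. A minimal post-processing sketch, assuming the console output has been saved to a file (build.log below is only a placeholder name, not a file produced by this job):

    # Rough triage of the completion flood; build.log stands in for the saved console text.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE' build.log | wc -l
    # Break the occurrences down per completion status string printed by spdk_nvme_print_completion:
    grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z -]*([0-9a-f]*/[0-9a-f]*)' build.log \
      | sort | uniq -c | sort -rn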
00:26:51.251 [2024-07-15 21:17:04.062799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.251 [2024-07-15 21:17:04.062821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:51.251 [2024-07-15 21:17:04.062838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.062843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.062859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.062864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.062881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.062886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.062903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.062908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.062924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.062929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.062946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.062951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.063098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.063104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.063122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.063127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.063144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.063149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.063168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.063173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.063190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.063195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.063212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.063217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.063237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.063243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:04.063260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:04.063265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.095443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.252 [2024-07-15 21:17:16.095483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.095516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.252 [2024-07-15 21:17:16.095522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.095533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.252 [2024-07-15 21:17:16.095538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.095549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.252 [2024-07-15 21:17:16.095554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.095565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.252 [2024-07-15 21:17:16.095570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.095990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.252 [2024-07-15 21:17:16.095998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.096009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.252 [2024-07-15 21:17:16.096014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.096025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.252 [2024-07-15 21:17:16.096037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:51.252 [2024-07-15 21:17:16.097040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.252 [2024-07-15 21:17:16.097056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:51.252 Received shutdown signal, test time was about 25.666904 seconds 00:26:51.252 00:26:51.252 Latency(us) 00:26:51.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.252 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:51.252 Verification LBA range: start 0x0 length 0x4000 00:26:51.252 Nvme0n1 : 25.67 10906.99 42.61 0.00 0.00 11717.96 436.91 3075822.93 00:26:51.252 =================================================================================================================== 00:26:51.252 Total : 10906.99 42.61 0.00 0.00 11717.96 436.91 3075822.93 00:26:51.252 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.517 rmmod nvme_tcp 00:26:51.517 rmmod nvme_fabrics 00:26:51.517 rmmod nvme_keyring 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2101838 ']' 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2101838 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2101838 ']' 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2101838 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2101838 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2101838' 00:26:51.517 killing process with pid 2101838 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2101838 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2101838 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.517 21:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.065 21:17:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.065 00:26:54.065 real 0m39.843s 00:26:54.065 user 1m41.497s 00:26:54.065 sys 0m11.049s 00:26:54.065 21:17:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:54.065 21:17:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:54.065 ************************************ 00:26:54.065 END TEST nvmf_host_multipath_status 00:26:54.065 ************************************ 00:26:54.065 21:17:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:54.065 21:17:20 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:54.065 21:17:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:54.065 21:17:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.065 21:17:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.065 ************************************ 00:26:54.065 START TEST nvmf_discovery_remove_ifc 00:26:54.065 ************************************ 00:26:54.065 21:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:54.065 * Looking for test storage... 00:26:54.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:54.065 21:17:21 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:54.065 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:54.066 21:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 
00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:02.209 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:02.209 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:02.209 Found net devices under 0000:31:00.0: cvl_0_0 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:02.209 Found net devices under 0000:31:00.1: cvl_0_1 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:27:02.209 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.210 21:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:02.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:27:02.210 00:27:02.210 --- 10.0.0.2 ping statistics --- 00:27:02.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.210 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:02.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:27:02.210 00:27:02.210 --- 10.0.0.1 ping statistics --- 00:27:02.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.210 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2112370 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2112370 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2112370 ']' 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:02.210 21:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.210 [2024-07-15 21:17:29.251379] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
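Everything from the device scan down to this point is the nvmftestinit/nvmf_tcp_init plumbing shown in the trace above: the first e810 port (cvl_0_0) is moved into a private network namespace and used as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, both get a 10.0.0.x/24 address, TCP port 4420 is opened in iptables, connectivity is checked with ping in both directions, and nvmf_tgt is then started inside that namespace. A condensed recap of the commands that just ran (interface names, addresses and flags copied from the log above; the binary path is abbreviated, and this is only a sketch of the flow, not the test suite's common.sh itself):

    # Move the target-side port into its own namespace and address both sides.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in from the initiator-side interface, then sanity-check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Load the initiator-side kernel driver and launch the SPDK target inside the namespace
    # (the log uses the full workspace path; nvmfappstart runs it in the background).
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The DPDK/EAL startup banner that continues below is this nvmf_tgt instance coming up inside cvl_0_0_ns_spdk.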
00:27:02.210 [2024-07-15 21:17:29.251445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.210 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.210 [2024-07-15 21:17:29.347047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.210 [2024-07-15 21:17:29.439536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.210 [2024-07-15 21:17:29.439591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.210 [2024-07-15 21:17:29.439598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.210 [2024-07-15 21:17:29.439605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.210 [2024-07-15 21:17:29.439611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.210 [2024-07-15 21:17:29.439635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.781 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.781 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:02.781 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.781 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:02.781 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.041 [2024-07-15 21:17:30.088246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.041 [2024-07-15 21:17:30.096491] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:03.041 null0 00:27:03.041 [2024-07-15 21:17:30.128460] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2112641 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2112641 /tmp/host.sock 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2112641 ']' 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:03.041 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.041 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.041 [2024-07-15 21:17:30.212587] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:27:03.041 [2024-07-15 21:17:30.212658] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112641 ] 00:27:03.041 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.041 [2024-07-15 21:17:30.283175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.301 [2024-07-15 21:17:30.357751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.871 21:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.871 21:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.871 21:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:03.871 21:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.871 21:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.822 [2024-07-15 21:17:32.056197] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:04.822 [2024-07-15 21:17:32.056221] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:04.822 [2024-07-15 21:17:32.056237] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:05.084 [2024-07-15 21:17:32.185672] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:05.343 [2024-07-15 21:17:32.413625] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:05.343 [2024-07-15 21:17:32.413683] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:05.343 [2024-07-15 21:17:32.413707] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:05.343 [2024-07-15 21:17:32.413720] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:05.343 [2024-07-15 21:17:32.413741] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:05.343 [2024-07-15 21:17:32.416116] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb06690 was disconnected and freed. delete nvme_qpair. 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.343 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.344 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.344 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.344 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.344 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.603 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.603 21:17:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:05.603 21:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:06.542 21:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:07.517 21:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.898 21:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.838 21:17:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:09.838 21:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.780 [2024-07-15 21:17:37.854047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:10.780 [2024-07-15 21:17:37.854091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.780 [2024-07-15 21:17:37.854103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.780 [2024-07-15 21:17:37.854118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.780 [2024-07-15 21:17:37.854126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.780 [2024-07-15 21:17:37.854134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.780 [2024-07-15 21:17:37.854141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.780 [2024-07-15 21:17:37.854148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.780 [2024-07-15 21:17:37.854155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.780 [2024-07-15 21:17:37.854164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.780 [2024-07-15 21:17:37.854171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.780 [2024-07-15 21:17:37.854178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacd110 is same with the state(5) to be set 00:27:10.780 [2024-07-15 21:17:37.864066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacd110 (9): Bad file descriptor 00:27:10.780 [2024-07-15 21:17:37.874106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:10.780 21:17:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.780 21:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.780 21:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.780 21:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.780 21:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.780 21:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.780 21:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:11.721 [2024-07-15 21:17:38.897267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:11.721 [2024-07-15 21:17:38.897308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd110 with addr=10.0.0.2, port=4420 00:27:11.721 [2024-07-15 21:17:38.897320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacd110 is same with the state(5) to be set 00:27:11.721 [2024-07-15 21:17:38.897345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacd110 (9): Bad file descriptor 00:27:11.721 [2024-07-15 21:17:38.897718] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:11.721 [2024-07-15 21:17:38.897735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.721 [2024-07-15 21:17:38.897743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:11.721 [2024-07-15 21:17:38.897751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:11.721 [2024-07-15 21:17:38.897768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:11.721 [2024-07-15 21:17:38.897776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:11.721 21:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.721 21:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:11.721 21:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:12.662 [2024-07-15 21:17:39.900150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:12.662 [2024-07-15 21:17:39.900172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:12.662 [2024-07-15 21:17:39.900180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:12.662 [2024-07-15 21:17:39.900188] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:12.662 [2024-07-15 21:17:39.900201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.662 [2024-07-15 21:17:39.900222] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:12.662 [2024-07-15 21:17:39.900248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.662 [2024-07-15 21:17:39.900259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.662 [2024-07-15 21:17:39.900271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.662 [2024-07-15 21:17:39.900278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.662 [2024-07-15 21:17:39.900287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.662 [2024-07-15 21:17:39.900293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.662 [2024-07-15 21:17:39.900301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.662 [2024-07-15 21:17:39.900308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.662 [2024-07-15 21:17:39.900316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.662 [2024-07-15 21:17:39.900323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.662 [2024-07-15 21:17:39.900331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:12.662 [2024-07-15 21:17:39.900740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacc590 (9): Bad file descriptor 00:27:12.662 [2024-07-15 21:17:39.901752] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:12.662 [2024-07-15 21:17:39.901763] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:12.662 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:12.662 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.662 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:12.662 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.662 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.662 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:12.662 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:12.662 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.923 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:12.923 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.923 21:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:12.923 21:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:13.862 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:13.862 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.862 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:13.862 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.862 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:27:13.862 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:13.862 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.121 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.121 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:14.121 21:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:14.692 [2024-07-15 21:17:41.952424] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:14.692 [2024-07-15 21:17:41.952442] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:14.692 [2024-07-15 21:17:41.952456] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:14.953 [2024-07-15 21:17:42.081887] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:14.953 [2024-07-15 21:17:42.184634] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:14.953 [2024-07-15 21:17:42.184672] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:14.953 [2024-07-15 21:17:42.184691] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:14.953 [2024-07-15 21:17:42.184705] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:14.953 [2024-07-15 21:17:42.184713] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:14.953 [2024-07-15 21:17:42.190373] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb123a0 was disconnected and freed. delete nvme_qpair. 
00:27:14.953 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.953 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.953 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.953 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.953 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.953 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.953 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.953 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2112641 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2112641 ']' 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2112641 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2112641 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2112641' 00:27:15.214 killing process with pid 2112641 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2112641 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2112641 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.214 rmmod nvme_tcp 00:27:15.214 rmmod nvme_fabrics 00:27:15.214 rmmod nvme_keyring 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2112370 ']' 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2112370 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2112370 ']' 00:27:15.214 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2112370 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2112370 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2112370' 00:27:15.474 killing process with pid 2112370 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2112370 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2112370 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.474 21:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.019 21:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:18.019 00:27:18.019 real 0m23.823s 00:27:18.019 user 0m27.557s 00:27:18.019 sys 0m7.242s 00:27:18.019 21:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:18.019 21:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.019 ************************************ 00:27:18.019 END TEST nvmf_discovery_remove_ifc 00:27:18.019 ************************************ 00:27:18.019 21:17:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:18.019 21:17:44 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:18.019 21:17:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:18.019 21:17:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.019 21:17:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.019 ************************************ 00:27:18.019 START TEST nvmf_identify_kernel_target 00:27:18.019 ************************************ 
00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:18.019 * Looking for test storage... 00:27:18.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.019 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:18.020 21:17:44 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:18.020 21:17:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:26.174 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:26.174 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:26.174 Found net devices under 0000:31:00.0: cvl_0_0 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:26.174 Found net devices under 0000:31:00.1: cvl_0_1 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:26.174 21:17:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:26.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:27:26.174 00:27:26.174 --- 10.0.0.2 ping statistics --- 00:27:26.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.174 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:26.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:27:26.174 00:27:26.174 --- 10.0.0.1 ping statistics --- 00:27:26.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.174 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:26.174 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:26.175 21:17:53 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:26.175 21:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:30.383 Waiting for block devices as requested 00:27:30.383 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:30.383 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:30.383 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:30.383 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:30.383 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:30.383 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:30.383 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:30.383 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:30.383 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:30.383 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:30.643 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:30.643 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:30.643 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:30.643 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:30.904 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:30.904 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:30.904 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:30.904 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:30.904 No valid GPT data, bailing 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:31.165 00:27:31.165 Discovery Log Number of Records 2, Generation counter 2 00:27:31.165 =====Discovery Log Entry 0====== 00:27:31.165 trtype: tcp 00:27:31.165 adrfam: ipv4 00:27:31.165 subtype: current discovery subsystem 00:27:31.165 treq: not specified, sq flow control disable supported 00:27:31.165 portid: 1 00:27:31.165 trsvcid: 4420 00:27:31.165 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:31.165 traddr: 10.0.0.1 00:27:31.165 eflags: none 00:27:31.165 sectype: none 00:27:31.165 =====Discovery Log Entry 1====== 00:27:31.165 trtype: tcp 00:27:31.165 adrfam: ipv4 00:27:31.165 subtype: nvme subsystem 00:27:31.165 treq: not specified, sq flow control disable supported 00:27:31.165 portid: 1 00:27:31.165 trsvcid: 4420 00:27:31.165 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:31.165 traddr: 10.0.0.1 00:27:31.165 eflags: none 00:27:31.165 sectype: none 00:27:31.165 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:31.165 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:31.165 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.165 ===================================================== 00:27:31.165 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:31.165 ===================================================== 00:27:31.165 Controller Capabilities/Features 00:27:31.165 ================================ 00:27:31.165 Vendor ID: 0000 00:27:31.165 Subsystem Vendor ID: 0000 00:27:31.165 Serial Number: a36de9c5610bca140361 00:27:31.165 Model Number: Linux 00:27:31.165 Firmware Version: 6.7.0-68 00:27:31.165 Recommended Arb Burst: 0 00:27:31.165 IEEE OUI Identifier: 00 00 00 00:27:31.165 Multi-path I/O 00:27:31.165 May have multiple subsystem ports: No 00:27:31.165 May have multiple 
controllers: No 00:27:31.165 Associated with SR-IOV VF: No 00:27:31.165 Max Data Transfer Size: Unlimited 00:27:31.165 Max Number of Namespaces: 0 00:27:31.165 Max Number of I/O Queues: 1024 00:27:31.165 NVMe Specification Version (VS): 1.3 00:27:31.165 NVMe Specification Version (Identify): 1.3 00:27:31.165 Maximum Queue Entries: 1024 00:27:31.165 Contiguous Queues Required: No 00:27:31.165 Arbitration Mechanisms Supported 00:27:31.166 Weighted Round Robin: Not Supported 00:27:31.166 Vendor Specific: Not Supported 00:27:31.166 Reset Timeout: 7500 ms 00:27:31.166 Doorbell Stride: 4 bytes 00:27:31.166 NVM Subsystem Reset: Not Supported 00:27:31.166 Command Sets Supported 00:27:31.166 NVM Command Set: Supported 00:27:31.166 Boot Partition: Not Supported 00:27:31.166 Memory Page Size Minimum: 4096 bytes 00:27:31.166 Memory Page Size Maximum: 4096 bytes 00:27:31.166 Persistent Memory Region: Not Supported 00:27:31.166 Optional Asynchronous Events Supported 00:27:31.166 Namespace Attribute Notices: Not Supported 00:27:31.166 Firmware Activation Notices: Not Supported 00:27:31.166 ANA Change Notices: Not Supported 00:27:31.166 PLE Aggregate Log Change Notices: Not Supported 00:27:31.166 LBA Status Info Alert Notices: Not Supported 00:27:31.166 EGE Aggregate Log Change Notices: Not Supported 00:27:31.166 Normal NVM Subsystem Shutdown event: Not Supported 00:27:31.166 Zone Descriptor Change Notices: Not Supported 00:27:31.166 Discovery Log Change Notices: Supported 00:27:31.166 Controller Attributes 00:27:31.166 128-bit Host Identifier: Not Supported 00:27:31.166 Non-Operational Permissive Mode: Not Supported 00:27:31.166 NVM Sets: Not Supported 00:27:31.166 Read Recovery Levels: Not Supported 00:27:31.166 Endurance Groups: Not Supported 00:27:31.166 Predictable Latency Mode: Not Supported 00:27:31.166 Traffic Based Keep ALive: Not Supported 00:27:31.166 Namespace Granularity: Not Supported 00:27:31.166 SQ Associations: Not Supported 00:27:31.166 UUID List: Not Supported 00:27:31.166 Multi-Domain Subsystem: Not Supported 00:27:31.166 Fixed Capacity Management: Not Supported 00:27:31.166 Variable Capacity Management: Not Supported 00:27:31.166 Delete Endurance Group: Not Supported 00:27:31.166 Delete NVM Set: Not Supported 00:27:31.166 Extended LBA Formats Supported: Not Supported 00:27:31.166 Flexible Data Placement Supported: Not Supported 00:27:31.166 00:27:31.166 Controller Memory Buffer Support 00:27:31.166 ================================ 00:27:31.166 Supported: No 00:27:31.166 00:27:31.166 Persistent Memory Region Support 00:27:31.166 ================================ 00:27:31.166 Supported: No 00:27:31.166 00:27:31.166 Admin Command Set Attributes 00:27:31.166 ============================ 00:27:31.166 Security Send/Receive: Not Supported 00:27:31.166 Format NVM: Not Supported 00:27:31.166 Firmware Activate/Download: Not Supported 00:27:31.166 Namespace Management: Not Supported 00:27:31.166 Device Self-Test: Not Supported 00:27:31.166 Directives: Not Supported 00:27:31.166 NVMe-MI: Not Supported 00:27:31.166 Virtualization Management: Not Supported 00:27:31.166 Doorbell Buffer Config: Not Supported 00:27:31.166 Get LBA Status Capability: Not Supported 00:27:31.166 Command & Feature Lockdown Capability: Not Supported 00:27:31.166 Abort Command Limit: 1 00:27:31.166 Async Event Request Limit: 1 00:27:31.166 Number of Firmware Slots: N/A 00:27:31.166 Firmware Slot 1 Read-Only: N/A 00:27:31.166 Firmware Activation Without Reset: N/A 00:27:31.166 Multiple Update Detection Support: N/A 
00:27:31.166 Firmware Update Granularity: No Information Provided 00:27:31.166 Per-Namespace SMART Log: No 00:27:31.166 Asymmetric Namespace Access Log Page: Not Supported 00:27:31.166 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:31.166 Command Effects Log Page: Not Supported 00:27:31.166 Get Log Page Extended Data: Supported 00:27:31.166 Telemetry Log Pages: Not Supported 00:27:31.166 Persistent Event Log Pages: Not Supported 00:27:31.166 Supported Log Pages Log Page: May Support 00:27:31.166 Commands Supported & Effects Log Page: Not Supported 00:27:31.166 Feature Identifiers & Effects Log Page:May Support 00:27:31.166 NVMe-MI Commands & Effects Log Page: May Support 00:27:31.166 Data Area 4 for Telemetry Log: Not Supported 00:27:31.166 Error Log Page Entries Supported: 1 00:27:31.166 Keep Alive: Not Supported 00:27:31.166 00:27:31.166 NVM Command Set Attributes 00:27:31.166 ========================== 00:27:31.166 Submission Queue Entry Size 00:27:31.166 Max: 1 00:27:31.166 Min: 1 00:27:31.166 Completion Queue Entry Size 00:27:31.166 Max: 1 00:27:31.166 Min: 1 00:27:31.166 Number of Namespaces: 0 00:27:31.166 Compare Command: Not Supported 00:27:31.166 Write Uncorrectable Command: Not Supported 00:27:31.166 Dataset Management Command: Not Supported 00:27:31.166 Write Zeroes Command: Not Supported 00:27:31.166 Set Features Save Field: Not Supported 00:27:31.166 Reservations: Not Supported 00:27:31.166 Timestamp: Not Supported 00:27:31.166 Copy: Not Supported 00:27:31.166 Volatile Write Cache: Not Present 00:27:31.166 Atomic Write Unit (Normal): 1 00:27:31.166 Atomic Write Unit (PFail): 1 00:27:31.166 Atomic Compare & Write Unit: 1 00:27:31.166 Fused Compare & Write: Not Supported 00:27:31.166 Scatter-Gather List 00:27:31.166 SGL Command Set: Supported 00:27:31.166 SGL Keyed: Not Supported 00:27:31.166 SGL Bit Bucket Descriptor: Not Supported 00:27:31.166 SGL Metadata Pointer: Not Supported 00:27:31.166 Oversized SGL: Not Supported 00:27:31.166 SGL Metadata Address: Not Supported 00:27:31.166 SGL Offset: Supported 00:27:31.166 Transport SGL Data Block: Not Supported 00:27:31.166 Replay Protected Memory Block: Not Supported 00:27:31.166 00:27:31.166 Firmware Slot Information 00:27:31.166 ========================= 00:27:31.166 Active slot: 0 00:27:31.166 00:27:31.166 00:27:31.166 Error Log 00:27:31.166 ========= 00:27:31.166 00:27:31.166 Active Namespaces 00:27:31.166 ================= 00:27:31.166 Discovery Log Page 00:27:31.166 ================== 00:27:31.166 Generation Counter: 2 00:27:31.166 Number of Records: 2 00:27:31.166 Record Format: 0 00:27:31.166 00:27:31.166 Discovery Log Entry 0 00:27:31.166 ---------------------- 00:27:31.166 Transport Type: 3 (TCP) 00:27:31.166 Address Family: 1 (IPv4) 00:27:31.166 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:31.166 Entry Flags: 00:27:31.166 Duplicate Returned Information: 0 00:27:31.166 Explicit Persistent Connection Support for Discovery: 0 00:27:31.166 Transport Requirements: 00:27:31.166 Secure Channel: Not Specified 00:27:31.166 Port ID: 1 (0x0001) 00:27:31.166 Controller ID: 65535 (0xffff) 00:27:31.166 Admin Max SQ Size: 32 00:27:31.166 Transport Service Identifier: 4420 00:27:31.166 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:31.166 Transport Address: 10.0.0.1 00:27:31.166 Discovery Log Entry 1 00:27:31.166 ---------------------- 00:27:31.166 Transport Type: 3 (TCP) 00:27:31.166 Address Family: 1 (IPv4) 00:27:31.166 Subsystem Type: 2 (NVM Subsystem) 00:27:31.166 Entry Flags: 
00:27:31.166 Duplicate Returned Information: 0 00:27:31.166 Explicit Persistent Connection Support for Discovery: 0 00:27:31.166 Transport Requirements: 00:27:31.166 Secure Channel: Not Specified 00:27:31.166 Port ID: 1 (0x0001) 00:27:31.166 Controller ID: 65535 (0xffff) 00:27:31.166 Admin Max SQ Size: 32 00:27:31.166 Transport Service Identifier: 4420 00:27:31.166 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:31.166 Transport Address: 10.0.0.1 00:27:31.166 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:31.166 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.166 get_feature(0x01) failed 00:27:31.166 get_feature(0x02) failed 00:27:31.166 get_feature(0x04) failed 00:27:31.166 ===================================================== 00:27:31.166 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:31.166 ===================================================== 00:27:31.166 Controller Capabilities/Features 00:27:31.166 ================================ 00:27:31.166 Vendor ID: 0000 00:27:31.166 Subsystem Vendor ID: 0000 00:27:31.166 Serial Number: 87f16928894b36a6cb7c 00:27:31.166 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:31.166 Firmware Version: 6.7.0-68 00:27:31.166 Recommended Arb Burst: 6 00:27:31.166 IEEE OUI Identifier: 00 00 00 00:27:31.166 Multi-path I/O 00:27:31.166 May have multiple subsystem ports: Yes 00:27:31.166 May have multiple controllers: Yes 00:27:31.166 Associated with SR-IOV VF: No 00:27:31.166 Max Data Transfer Size: Unlimited 00:27:31.166 Max Number of Namespaces: 1024 00:27:31.166 Max Number of I/O Queues: 128 00:27:31.166 NVMe Specification Version (VS): 1.3 00:27:31.166 NVMe Specification Version (Identify): 1.3 00:27:31.166 Maximum Queue Entries: 1024 00:27:31.166 Contiguous Queues Required: No 00:27:31.166 Arbitration Mechanisms Supported 00:27:31.166 Weighted Round Robin: Not Supported 00:27:31.166 Vendor Specific: Not Supported 00:27:31.166 Reset Timeout: 7500 ms 00:27:31.166 Doorbell Stride: 4 bytes 00:27:31.166 NVM Subsystem Reset: Not Supported 00:27:31.166 Command Sets Supported 00:27:31.166 NVM Command Set: Supported 00:27:31.166 Boot Partition: Not Supported 00:27:31.166 Memory Page Size Minimum: 4096 bytes 00:27:31.166 Memory Page Size Maximum: 4096 bytes 00:27:31.166 Persistent Memory Region: Not Supported 00:27:31.166 Optional Asynchronous Events Supported 00:27:31.166 Namespace Attribute Notices: Supported 00:27:31.166 Firmware Activation Notices: Not Supported 00:27:31.166 ANA Change Notices: Supported 00:27:31.166 PLE Aggregate Log Change Notices: Not Supported 00:27:31.166 LBA Status Info Alert Notices: Not Supported 00:27:31.166 EGE Aggregate Log Change Notices: Not Supported 00:27:31.166 Normal NVM Subsystem Shutdown event: Not Supported 00:27:31.166 Zone Descriptor Change Notices: Not Supported 00:27:31.166 Discovery Log Change Notices: Not Supported 00:27:31.166 Controller Attributes 00:27:31.166 128-bit Host Identifier: Supported 00:27:31.166 Non-Operational Permissive Mode: Not Supported 00:27:31.166 NVM Sets: Not Supported 00:27:31.166 Read Recovery Levels: Not Supported 00:27:31.166 Endurance Groups: Not Supported 00:27:31.166 Predictable Latency Mode: Not Supported 00:27:31.166 Traffic Based Keep ALive: Supported 00:27:31.166 Namespace Granularity: Not Supported 
00:27:31.166 SQ Associations: Not Supported 00:27:31.166 UUID List: Not Supported 00:27:31.166 Multi-Domain Subsystem: Not Supported 00:27:31.166 Fixed Capacity Management: Not Supported 00:27:31.166 Variable Capacity Management: Not Supported 00:27:31.166 Delete Endurance Group: Not Supported 00:27:31.166 Delete NVM Set: Not Supported 00:27:31.166 Extended LBA Formats Supported: Not Supported 00:27:31.166 Flexible Data Placement Supported: Not Supported 00:27:31.166 00:27:31.166 Controller Memory Buffer Support 00:27:31.166 ================================ 00:27:31.166 Supported: No 00:27:31.166 00:27:31.166 Persistent Memory Region Support 00:27:31.166 ================================ 00:27:31.166 Supported: No 00:27:31.166 00:27:31.166 Admin Command Set Attributes 00:27:31.166 ============================ 00:27:31.166 Security Send/Receive: Not Supported 00:27:31.166 Format NVM: Not Supported 00:27:31.166 Firmware Activate/Download: Not Supported 00:27:31.166 Namespace Management: Not Supported 00:27:31.166 Device Self-Test: Not Supported 00:27:31.166 Directives: Not Supported 00:27:31.166 NVMe-MI: Not Supported 00:27:31.166 Virtualization Management: Not Supported 00:27:31.166 Doorbell Buffer Config: Not Supported 00:27:31.166 Get LBA Status Capability: Not Supported 00:27:31.166 Command & Feature Lockdown Capability: Not Supported 00:27:31.166 Abort Command Limit: 4 00:27:31.166 Async Event Request Limit: 4 00:27:31.166 Number of Firmware Slots: N/A 00:27:31.166 Firmware Slot 1 Read-Only: N/A 00:27:31.166 Firmware Activation Without Reset: N/A 00:27:31.166 Multiple Update Detection Support: N/A 00:27:31.166 Firmware Update Granularity: No Information Provided 00:27:31.166 Per-Namespace SMART Log: Yes 00:27:31.166 Asymmetric Namespace Access Log Page: Supported 00:27:31.166 ANA Transition Time : 10 sec 00:27:31.166 00:27:31.166 Asymmetric Namespace Access Capabilities 00:27:31.166 ANA Optimized State : Supported 00:27:31.166 ANA Non-Optimized State : Supported 00:27:31.166 ANA Inaccessible State : Supported 00:27:31.166 ANA Persistent Loss State : Supported 00:27:31.166 ANA Change State : Supported 00:27:31.166 ANAGRPID is not changed : No 00:27:31.166 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:31.166 00:27:31.166 ANA Group Identifier Maximum : 128 00:27:31.166 Number of ANA Group Identifiers : 128 00:27:31.166 Max Number of Allowed Namespaces : 1024 00:27:31.166 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:31.166 Command Effects Log Page: Supported 00:27:31.166 Get Log Page Extended Data: Supported 00:27:31.166 Telemetry Log Pages: Not Supported 00:27:31.166 Persistent Event Log Pages: Not Supported 00:27:31.166 Supported Log Pages Log Page: May Support 00:27:31.167 Commands Supported & Effects Log Page: Not Supported 00:27:31.167 Feature Identifiers & Effects Log Page:May Support 00:27:31.167 NVMe-MI Commands & Effects Log Page: May Support 00:27:31.167 Data Area 4 for Telemetry Log: Not Supported 00:27:31.167 Error Log Page Entries Supported: 128 00:27:31.167 Keep Alive: Supported 00:27:31.167 Keep Alive Granularity: 1000 ms 00:27:31.167 00:27:31.167 NVM Command Set Attributes 00:27:31.167 ========================== 00:27:31.167 Submission Queue Entry Size 00:27:31.167 Max: 64 00:27:31.167 Min: 64 00:27:31.167 Completion Queue Entry Size 00:27:31.167 Max: 16 00:27:31.167 Min: 16 00:27:31.167 Number of Namespaces: 1024 00:27:31.167 Compare Command: Not Supported 00:27:31.167 Write Uncorrectable Command: Not Supported 00:27:31.167 Dataset Management Command: Supported 
00:27:31.167 Write Zeroes Command: Supported 00:27:31.167 Set Features Save Field: Not Supported 00:27:31.167 Reservations: Not Supported 00:27:31.167 Timestamp: Not Supported 00:27:31.167 Copy: Not Supported 00:27:31.167 Volatile Write Cache: Present 00:27:31.167 Atomic Write Unit (Normal): 1 00:27:31.167 Atomic Write Unit (PFail): 1 00:27:31.167 Atomic Compare & Write Unit: 1 00:27:31.167 Fused Compare & Write: Not Supported 00:27:31.167 Scatter-Gather List 00:27:31.167 SGL Command Set: Supported 00:27:31.167 SGL Keyed: Not Supported 00:27:31.167 SGL Bit Bucket Descriptor: Not Supported 00:27:31.167 SGL Metadata Pointer: Not Supported 00:27:31.167 Oversized SGL: Not Supported 00:27:31.167 SGL Metadata Address: Not Supported 00:27:31.167 SGL Offset: Supported 00:27:31.167 Transport SGL Data Block: Not Supported 00:27:31.167 Replay Protected Memory Block: Not Supported 00:27:31.167 00:27:31.167 Firmware Slot Information 00:27:31.167 ========================= 00:27:31.167 Active slot: 0 00:27:31.167 00:27:31.167 Asymmetric Namespace Access 00:27:31.167 =========================== 00:27:31.167 Change Count : 0 00:27:31.167 Number of ANA Group Descriptors : 1 00:27:31.167 ANA Group Descriptor : 0 00:27:31.167 ANA Group ID : 1 00:27:31.167 Number of NSID Values : 1 00:27:31.167 Change Count : 0 00:27:31.167 ANA State : 1 00:27:31.167 Namespace Identifier : 1 00:27:31.167 00:27:31.167 Commands Supported and Effects 00:27:31.167 ============================== 00:27:31.167 Admin Commands 00:27:31.167 -------------- 00:27:31.167 Get Log Page (02h): Supported 00:27:31.167 Identify (06h): Supported 00:27:31.167 Abort (08h): Supported 00:27:31.167 Set Features (09h): Supported 00:27:31.167 Get Features (0Ah): Supported 00:27:31.167 Asynchronous Event Request (0Ch): Supported 00:27:31.167 Keep Alive (18h): Supported 00:27:31.167 I/O Commands 00:27:31.167 ------------ 00:27:31.167 Flush (00h): Supported 00:27:31.167 Write (01h): Supported LBA-Change 00:27:31.167 Read (02h): Supported 00:27:31.167 Write Zeroes (08h): Supported LBA-Change 00:27:31.167 Dataset Management (09h): Supported 00:27:31.167 00:27:31.167 Error Log 00:27:31.167 ========= 00:27:31.167 Entry: 0 00:27:31.167 Error Count: 0x3 00:27:31.167 Submission Queue Id: 0x0 00:27:31.167 Command Id: 0x5 00:27:31.167 Phase Bit: 0 00:27:31.167 Status Code: 0x2 00:27:31.167 Status Code Type: 0x0 00:27:31.167 Do Not Retry: 1 00:27:31.167 Error Location: 0x28 00:27:31.167 LBA: 0x0 00:27:31.167 Namespace: 0x0 00:27:31.167 Vendor Log Page: 0x0 00:27:31.167 ----------- 00:27:31.167 Entry: 1 00:27:31.167 Error Count: 0x2 00:27:31.167 Submission Queue Id: 0x0 00:27:31.167 Command Id: 0x5 00:27:31.167 Phase Bit: 0 00:27:31.167 Status Code: 0x2 00:27:31.167 Status Code Type: 0x0 00:27:31.167 Do Not Retry: 1 00:27:31.167 Error Location: 0x28 00:27:31.167 LBA: 0x0 00:27:31.167 Namespace: 0x0 00:27:31.167 Vendor Log Page: 0x0 00:27:31.167 ----------- 00:27:31.167 Entry: 2 00:27:31.167 Error Count: 0x1 00:27:31.167 Submission Queue Id: 0x0 00:27:31.167 Command Id: 0x4 00:27:31.167 Phase Bit: 0 00:27:31.167 Status Code: 0x2 00:27:31.167 Status Code Type: 0x0 00:27:31.167 Do Not Retry: 1 00:27:31.167 Error Location: 0x28 00:27:31.167 LBA: 0x0 00:27:31.167 Namespace: 0x0 00:27:31.167 Vendor Log Page: 0x0 00:27:31.167 00:27:31.167 Number of Queues 00:27:31.167 ================ 00:27:31.167 Number of I/O Submission Queues: 128 00:27:31.167 Number of I/O Completion Queues: 128 00:27:31.167 00:27:31.167 ZNS Specific Controller Data 00:27:31.167 
============================ 00:27:31.167 Zone Append Size Limit: 0 00:27:31.167 00:27:31.167 00:27:31.167 Active Namespaces 00:27:31.167 ================= 00:27:31.167 get_feature(0x05) failed 00:27:31.167 Namespace ID:1 00:27:31.167 Command Set Identifier: NVM (00h) 00:27:31.167 Deallocate: Supported 00:27:31.167 Deallocated/Unwritten Error: Not Supported 00:27:31.167 Deallocated Read Value: Unknown 00:27:31.167 Deallocate in Write Zeroes: Not Supported 00:27:31.167 Deallocated Guard Field: 0xFFFF 00:27:31.167 Flush: Supported 00:27:31.167 Reservation: Not Supported 00:27:31.167 Namespace Sharing Capabilities: Multiple Controllers 00:27:31.167 Size (in LBAs): 3750748848 (1788GiB) 00:27:31.167 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:31.167 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:31.167 UUID: f16cefca-fcf7-4110-9741-c2d703132fa3 00:27:31.167 Thin Provisioning: Not Supported 00:27:31.167 Per-NS Atomic Units: Yes 00:27:31.167 Atomic Write Unit (Normal): 8 00:27:31.167 Atomic Write Unit (PFail): 8 00:27:31.167 Preferred Write Granularity: 8 00:27:31.167 Atomic Compare & Write Unit: 8 00:27:31.167 Atomic Boundary Size (Normal): 0 00:27:31.167 Atomic Boundary Size (PFail): 0 00:27:31.167 Atomic Boundary Offset: 0 00:27:31.167 NGUID/EUI64 Never Reused: No 00:27:31.167 ANA group ID: 1 00:27:31.167 Namespace Write Protected: No 00:27:31.167 Number of LBA Formats: 1 00:27:31.167 Current LBA Format: LBA Format #00 00:27:31.167 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:31.167 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:31.167 rmmod nvme_tcp 00:27:31.167 rmmod nvme_fabrics 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.167 21:17:58 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:33.719 21:18:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:37.104 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:37.104 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:37.104 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:37.366 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:37.628 00:27:37.628 real 0m19.838s 00:27:37.628 user 0m5.412s 00:27:37.628 sys 0m11.571s 00:27:37.628 21:18:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.628 21:18:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:37.628 ************************************ 00:27:37.628 END TEST nvmf_identify_kernel_target 00:27:37.628 ************************************ 00:27:37.628 21:18:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:37.628 21:18:04 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:37.628 21:18:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:37.628 21:18:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.628 21:18:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.628 ************************************ 
00:27:37.628 START TEST nvmf_auth_host 00:27:37.628 ************************************ 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:37.628 * Looking for test storage... 00:27:37.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:37.628 21:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.764 
21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:45.764 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:45.764 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:45.764 Found net devices under 0000:31:00.0: 
cvl_0_0 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:45.764 Found net devices under 0000:31:00.1: cvl_0_1 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.764 21:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.764 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.764 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:45.764 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:27:46.025 00:27:46.025 --- 10.0.0.2 ping statistics --- 00:27:46.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.025 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:27:46.025 00:27:46.025 --- 10.0.0.1 ping statistics --- 00:27:46.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.025 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2128623 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2128623 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2128623 ']' 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
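Note: the nvmf_auth_host entries that follow call gen_dhchap_key repeatedly to populate the keys[] and ckeys[] arrays. Each call pulls random bytes from /dev/urandom with xxd, formats them as a DH-HMAC-CHAP secret of the form DHHC-1:<hash-id>:<base64 blob>:, and writes the result to a mode-0600 temp file under /tmp. A minimal bash sketch of that flow is given below; it assumes the hash-id field is 00/01/02/03 for null/sha256/sha384/sha512 and that the base64 blob is the raw key with a little-endian CRC-32 appended. The script name and exact layout are illustrative, not the test's own helper.

#!/usr/bin/env bash
# gen_dhchap_secret.sh (hypothetical name) -- sketch of the gen_dhchap_key /
# format_dhchap_key flow seen in the log: random hex from /dev/urandom via
# xxd, wrapped into a "DHHC-1:<hash-id>:<base64(key || crc32 LE)>:" secret
# and stored in a 0600 temp file. The field layout is an assumption, not
# copied from the test scripts.
set -euo pipefail

len_bytes=${1:-16}   # raw secret length in bytes (the log uses 16/24/32)
hash_id=${2:-0}      # 0 = null (key used as-is), 1 = sha256, 2 = sha384, 3 = sha512

hexkey=$(xxd -p -c0 -l "$len_bytes" /dev/urandom)

secret=$(python3 - "$hexkey" "$hash_id" <<'PY'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key) & 0xffffffff                 # CRC-32 over the raw key
blob = base64.b64encode(key + struct.pack("<I", crc)).decode()
print(f"DHHC-1:{int(sys.argv[2]):02x}:{blob}:")
PY
)

keyfile=$(mktemp -t spdk.key-XXXXXX)
printf '%s\n' "$secret" > "$keyfile"
chmod 0600 "$keyfile"
echo "$keyfile"

The script prints the path of the generated key file, analogous to the /tmp/spdk.key-*.* paths that appear in the entries below.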
00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.025 21:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=48756cdf613fff619c55eee6dfd60a46 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FTF 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 48756cdf613fff619c55eee6dfd60a46 0 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 48756cdf613fff619c55eee6dfd60a46 0 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=48756cdf613fff619c55eee6dfd60a46 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FTF 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FTF 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.FTF 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:46.966 
21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c540a8489bea825fa49ef66642f9dc8487cd72f5bfe1f812c0b602801fef2d38 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tqt 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c540a8489bea825fa49ef66642f9dc8487cd72f5bfe1f812c0b602801fef2d38 3 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c540a8489bea825fa49ef66642f9dc8487cd72f5bfe1f812c0b602801fef2d38 3 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c540a8489bea825fa49ef66642f9dc8487cd72f5bfe1f812c0b602801fef2d38 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tqt 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tqt 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.tqt 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=79f4273e00dcc2ce79b5675f3f59355e39f460e6c7d3a6e7 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tV4 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 79f4273e00dcc2ce79b5675f3f59355e39f460e6c7d3a6e7 0 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 79f4273e00dcc2ce79b5675f3f59355e39f460e6c7d3a6e7 0 00:27:46.966 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=79f4273e00dcc2ce79b5675f3f59355e39f460e6c7d3a6e7 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tV4 00:27:46.967 21:18:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tV4 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.tV4 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:46.967 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a7bb5c76ae0127cc3eacedadbdd5e149f774e5d8684a1004 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xKu 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a7bb5c76ae0127cc3eacedadbdd5e149f774e5d8684a1004 2 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a7bb5c76ae0127cc3eacedadbdd5e149f774e5d8684a1004 2 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a7bb5c76ae0127cc3eacedadbdd5e149f774e5d8684a1004 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xKu 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xKu 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.xKu 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=655ea88be0cae4a70b4996b328569d3d 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5Ci 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 655ea88be0cae4a70b4996b328569d3d 1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 655ea88be0cae4a70b4996b328569d3d 1 
00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=655ea88be0cae4a70b4996b328569d3d 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5Ci 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5Ci 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5Ci 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=077e784e3bc8a0895dcdf15bd81af78b 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IJf 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 077e784e3bc8a0895dcdf15bd81af78b 1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 077e784e3bc8a0895dcdf15bd81af78b 1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=077e784e3bc8a0895dcdf15bd81af78b 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IJf 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IJf 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.IJf 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=487da51f73d290a50df48d24da707fcc3fa797869e95f5a7 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mBc 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 487da51f73d290a50df48d24da707fcc3fa797869e95f5a7 2 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 487da51f73d290a50df48d24da707fcc3fa797869e95f5a7 2 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=487da51f73d290a50df48d24da707fcc3fa797869e95f5a7 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mBc 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mBc 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.mBc 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7b71c3ac3fdfce9186377f101c8a51d9 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sIM 00:27:47.227 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7b71c3ac3fdfce9186377f101c8a51d9 0 00:27:47.228 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7b71c3ac3fdfce9186377f101c8a51d9 0 00:27:47.228 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.228 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:47.228 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7b71c3ac3fdfce9186377f101c8a51d9 00:27:47.228 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:47.228 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sIM 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sIM 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sIM 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=75b63f5b885d590e4a40b5bfdeae3c76495e8afbde74af2b8151ae67fca12b6e 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AQD 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 75b63f5b885d590e4a40b5bfdeae3c76495e8afbde74af2b8151ae67fca12b6e 3 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 75b63f5b885d590e4a40b5bfdeae3c76495e8afbde74af2b8151ae67fca12b6e 3 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=75b63f5b885d590e4a40b5bfdeae3c76495e8afbde74af2b8151ae67fca12b6e 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AQD 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AQD 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.AQD 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2128623 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2128623 ']' 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
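[editor note] The gen_dhchap_key calls traced above draw len/2 random bytes with xxd -p -c0 and wrap the resulting hex string into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest-id>:<base64 payload>: (digest ids 0..3 for null/sha256/sha384/sha512, matching the digests array in the trace). Below is a minimal bash sketch of that construction; the payload layout, the ASCII hex secret followed by its little-endian CRC-32 as computed by the inline python step, is an assumption inferred from the key strings that appear later in this log, and gen_dhchap_key_sketch is a stand-in name, not the helper from nvmf/common.sh itself.

# Sketch only: approximates the DHHC-1 secret files built by gen_dhchap_key above.
# Assumption: payload = base64(hex_secret || CRC-32(hex_secret), little-endian).
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

gen_dhchap_key_sketch() {
    local digest=$1 len=$2                                # len = number of hex characters requested
    local hex payload file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len/2 random bytes -> len hex chars
    payload=$(python3 - "$hex" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print(base64.b64encode(secret + crc).decode())
PY
)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    printf 'DHHC-1:%02x:%s:\n' "${digests[$digest]}" "$payload" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key_sketch sha256 32    # e.g. a 16-byte secret, as used for keys[2] above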
00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:47.489 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FTF 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.tqt ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tqt 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.tV4 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.xKu ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xKu 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5Ci 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.IJf ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IJf 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
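[editor note] With the target app answering on /var/tmp/spdk.sock, each secret file is registered in the SPDK keyring: keyN for the host key and ckeyN for the controller key (key0/ckey0 through key2/ckey2 above; key3, ckey3 and key4 are added the same way in the lines that follow, and ckeys[4] is intentionally left empty). The same loop expressed with rpc.py directly instead of the rpc_cmd wrapper; the script path mirrors the workspace layout seen elsewhere in this log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register every generated key file under a stable keyring name.
for i in "${!keys[@]}"; do
    "$RPC" -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${keys[$i]}"
    # ckeys[4] is empty in this run, so the controller key is optional.
    [[ -n ${ckeys[$i]} ]] && "$RPC" -s /var/tmp/spdk.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done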
00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.mBc 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sIM ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sIM 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.AQD 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.751 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
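[editor note] The nvmet_auth_init path that starts here (configure_kernel_target) builds a kernel NVMe-oF/TCP target through configfs: a subsystem backed by /dev/nvme0n1 as namespace 1, and a TCP port on 10.0.0.1:4420 that the subsystem is linked into. The echoes in the next lines write exactly these values; the sketch below spells out the corresponding writes, where the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumptions, since the trace shows only the echoed values and not their redirection targets.

# Sketch of configure_kernel_target's configfs writes; values are taken from the trace,
# attribute file names are assumed to be the standard nvmet configfs attributes.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Expose the subsystem on the port, as the ln -s in the trace does.
ln -s "$subsys" "$nvmet/ports/1/subsystems/"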
00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:47.752 21:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:51.954 Waiting for block devices as requested 00:27:51.954 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:51.954 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:51.954 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:51.954 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:51.954 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:51.954 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:51.954 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:51.954 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:51.954 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:52.215 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:52.215 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:52.215 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:52.476 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:52.476 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:52.476 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:52.737 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:52.737 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:53.308 No valid GPT data, bailing 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:53.308 21:18:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:53.309 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:53.309 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:53.309 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:53.309 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:53.309 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:53.309 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:53.309 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:53.309 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:53.570 00:27:53.570 Discovery Log Number of Records 2, Generation counter 2 00:27:53.570 =====Discovery Log Entry 0====== 00:27:53.570 trtype: tcp 00:27:53.570 adrfam: ipv4 00:27:53.570 subtype: current discovery subsystem 00:27:53.570 treq: not specified, sq flow control disable supported 00:27:53.570 portid: 1 00:27:53.570 trsvcid: 4420 00:27:53.570 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:53.570 traddr: 10.0.0.1 00:27:53.570 eflags: none 00:27:53.570 sectype: none 00:27:53.570 =====Discovery Log Entry 1====== 00:27:53.570 trtype: tcp 00:27:53.570 adrfam: ipv4 00:27:53.570 subtype: nvme subsystem 00:27:53.570 treq: not specified, sq flow control disable supported 00:27:53.570 portid: 1 00:27:53.570 trsvcid: 4420 00:27:53.570 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:53.570 traddr: 10.0.0.1 00:27:53.570 eflags: none 00:27:53.570 sectype: none 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 
]] 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:53.570 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.571 nvme0n1 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.571 21:18:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.571 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.832 
21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.832 21:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.833 21:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.833 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.833 21:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.833 nvme0n1 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.833 21:18:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.833 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.094 nvme0n1 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
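[editor note] Each connect_authenticate pass above pairs one digest/dhgroup with one key index: the matching DHHC-1 secrets are first pushed into the kernel host entry (the 'hmac(sha256)', ffdhe2048 and DHHC-1:... echoes), then bdev_nvme_set_options restricts the initiator to that digest and DH group, bdev_nvme_attach_controller authenticates with the corresponding keyring names, and bdev_nvme_get_controllers / bdev_nvme_detach_controller close the iteration. The initiator side of one such pass, condensed into plain rpc.py calls with the flags exactly as they appear in the trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Limit DH-HMAC-CHAP negotiation to a single digest and DH group for this pass ...
"$RPC" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# ... then attach to the kernel target, authenticating with key1/ckey1 from the keyring.
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller came up, then tear it down before the next digest/dhgroup/key combination.
"$RPC" bdev_nvme_get_controllers
"$RPC" bdev_nvme_detach_controller nvme0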
00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.094 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.356 nvme0n1 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:27:54.356 21:18:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.356 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.618 nvme0n1 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.618 nvme0n1 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.618 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.880 21:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.880 nvme0n1 00:27:54.880 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.880 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.880 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.880 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.880 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.880 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.141 nvme0n1 00:27:55.141 
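The connect step traced above reduces to two SPDK RPCs: bdev_nvme_set_options pins the host to a single digest/DH-group pair, and bdev_nvme_attach_controller supplies the DH-HMAC-CHAP key (plus the controller key when the iteration runs bidirectional auth). A condensed sketch of that pair, assuming rpc_cmd forwards to scripts/rpc.py as in the SPDK test framework and that key1/ckey1 were registered earlier in auth.sh:

    # Host-side connect for one digest/dhgroup/keyid (values taken from the trace above).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # prints the created bdev, e.g. nvme0n1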
21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.141 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.402 nvme0n1 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.402 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
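The echo 'hmac(sha256)' / echo ffdhe3072 / echo DHHC-1:... trio inside nvmet_auth_set_key is the target-side half of each iteration: digest, DH group and key are written for the host entry in the kernel nvmet configfs. The redirect targets are not visible in this xtrace, so the path and attribute names below are assumptions, shown only to make the step concrete:

    # Hypothetical target-side provisioning; auth.sh's actual redirects are not shown in the trace.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # assumed attribute name
    echo ffdhe3072      > "$host_dir/dhchap_dhgroup"   # assumed attribute name
    echo "$key"         > "$host_dir/dhchap_key"       # DHHC-1 key from the trace; a ckey would go to dhchap_ctrl_key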
00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.663 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.664 nvme0n1 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.664 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.924 
21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.924 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.925 21:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.925 21:18:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.925 nvme0n1 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.925 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.189 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:56.190 21:18:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.190 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.449 nvme0n1 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.449 21:18:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.449 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.709 nvme0n1 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.709 21:18:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.709 21:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.970 nvme0n1 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.970 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
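Each cycle also shows host/auth.sh@58 building the controller-key argument as an array, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): when the ckeys slot is empty (keyid 4 in this run), the array stays empty and --dhchap-ctrlr-key is simply not passed to bdev_nvme_attach_controller. A standalone illustration of that bash idiom, with made-up array contents:

    # The :+ expansion emits the flag pair only when the slot is non-empty,
    # so an empty or unset ckeys entry adds no extra arguments at all.
    ckeys=([1]="DHHC-1:02:example" [4]="")               # hypothetical values
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "extra args: ${ckey[@]:-<none>}"                # -> extra args: <none>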
00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.231 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.491 nvme0n1 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.491 21:18:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.491 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.492 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.492 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.492 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.492 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.492 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.492 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.752 nvme0n1 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:27:57.752 21:18:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.752 21:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.753 21:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.753 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.753 21:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.323 nvme0n1 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.323 
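After each attach, the script verifies the authenticated connection before tearing it down: bdev_nvme_get_controllers is piped through jq to extract the controller name, the result is compared against the expected nvme0, and the controller is detached (the detach is the next entry below). A compact sketch of that check using the same RPCs:

    # Confirm the expected controller exists, then remove it for the next iteration.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || exit 1
    rpc_cmd bdev_nvme_detach_controller nvme0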
21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.323 21:18:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.323 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.894 nvme0n1 00:27:58.894 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.894 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.894 21:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.894 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.894 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.894 21:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.894 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.466 nvme0n1 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.466 
21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.466 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.727 nvme0n1 00:27:59.727 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.727 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.727 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.727 21:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.727 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.727 21:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.988 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.247 nvme0n1 00:28:00.247 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.247 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.247 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.247 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.247 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.247 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.508 21:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.079 nvme0n1 00:28:01.079 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.079 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.079 21:18:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.079 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.079 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.079 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.340 21:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.911 nvme0n1 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.911 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.912 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.853 nvme0n1 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.853 
21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
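The nvmet_auth_set_key sha256 ffdhe8192 3 call traced just above (host/auth.sh@42-51) only shows the digest, dhgroup and DHHC-1 secrets being selected and echoed; the body of the helper lies outside this excerpt. A minimal sketch of the target-side provisioning it presumably performs is given here, assuming the kernel nvmet configfs layout under /sys/kernel/config/nvmet/hosts/<hostnqn> with its dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes; the host NQN and the keyid-3 secrets are taken from the trace, everything else is an assumption rather than the script's actual code.

  # hypothetical sketch of the target-side step behind: nvmet_auth_set_key sha256 ffdhe8192 3
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0    # hostnqn used by the attach_controller calls in this log
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"       # digest echoed at host/auth.sh@48
  echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"    # dhgroup echoed at host/auth.sh@49
  echo 'DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==:' > "$host_dir/dhchap_key"
  echo 'DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1:' > "$host_dir/dhchap_ctrl_key"   # written only when a ckey exists for this keyid

The DHHC-1:<id>: prefix on each secret is the representation nvme-cli's gen-dhchap-key emits: the two-digit field records the transform applied when the secret was generated (00 none, 01/02/03 SHA-256/384/512), which is why the secrets in this log carry different id values and lengths.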
00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.853 21:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.854 21:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.854 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.854 21:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.796 nvme0n1 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.796 
21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.796 21:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.368 nvme0n1 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.368 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.369 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.630 nvme0n1 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.630 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
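Stripped of the xtrace noise, every pass through host/auth.sh@102-104 in this section performs the same short sequence; it is condensed below for the sha384/ffdhe2048 combination being exercised at this point. The rpc_cmd calls, address, port, NQNs and keyN/ckeyN names are copied from the trace; the keys/ckeys arrays and the registration of those named keys happen earlier in auth.sh and are not part of this excerpt, and nvmet_auth_set_key is only sketched earlier in this section, so treat this as a condensed reading of the log rather than the script verbatim.

  # condensed per-keyid round, as repeated throughout this section of the log
  for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key sha384 ffdhe2048 "$keyid"      # re-provision the target side for this keyid
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # a successful DH-HMAC-CHAP handshake leaves exactly one controller, nvme0
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  done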
00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.631 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.892 nvme0n1 00:28:04.892 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.892 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.892 21:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.892 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.892 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.892 21:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.892 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.154 nvme0n1 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.154 nvme0n1 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.154 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.416 nvme0n1 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:05.416 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.417 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
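[editor's note] The nvmf/common.sh lines traced here are the get_main_ns_ip helper picking the address the host should dial: it keeps an associative array keyed by transport, looks up the name of the environment variable for that transport, and echoes its value (10.0.0.1 for tcp in this run). A minimal standalone sketch of that selection logic, with function and variable names assumed from the trace rather than copied from nvmf/common.sh:

#!/usr/bin/env bash
# Hypothetical re-creation of the IP-selection step visible in the trace.
# NVMF_FIRST_TARGET_IP / NVMF_INITIATOR_IP are assumed to be exported by the test environment.
get_main_ns_ip() {
    local transport=$1 ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma hosts dial the first target-side IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp hosts dial the initiator-side IP
    [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}     # name of the variable that holds the address
    [[ -z ${!ip} ]] && return 1         # indirect expansion: resolve that name to its value
    echo "${!ip}"
}
# Example: NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip tcp   ->   10.0.0.1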
00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.678 nvme0n1 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
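[editor's note] Every nvme0n1 block in this part of the log repeats the same connect_authenticate pattern: configure the host side for one digest/dhgroup pair, attach a controller with the matching DH-HMAC-CHAP key (plus a controller key when one exists, for bidirectional auth), confirm the controller shows up, then detach before the next combination. A condensed sketch of one iteration, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and that key1/ckey1 were registered earlier in the script; all flags are taken from the trace itself:

#!/usr/bin/env bash
# One connect_authenticate round from the trace (sha384 / ffdhe3072 / keyid 1).
digest=sha384 dhgroup=ffdhe3072 keyid=1
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# The controller is only listed if DH-HMAC-CHAP authentication succeeded.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1
rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next digest/dhgroup/key combo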
00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.678 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.939 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.939 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.939 21:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.939 21:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.939 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.940 21:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.940 nvme0n1 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.940 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.200 nvme0n1 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:06.200 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.201 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.461 nvme0n1 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.461 21:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.722 nvme0n1 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.722 21:18:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.722 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.723 21:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.723 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.983 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.983 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.983 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.244 nvme0n1 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.244 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.245 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.506 nvme0n1 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.506 21:18:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.506 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.507 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.767 nvme0n1 00:28:07.767 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.767 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.767 21:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.767 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.767 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.767 21:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:07.767 21:18:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.767 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.768 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:07.768 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.768 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.768 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:07.768 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.768 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.028 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.289 nvme0n1 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:08.289 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 nvme0n1 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 21:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.121 nvme0n1 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.121 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.693 nvme0n1 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.693 21:18:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.693 21:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.265 nvme0n1 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:10.265 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.266 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.526 nvme0n1 00:28:10.526 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.526 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.526 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.526 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.526 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
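The repeating pattern in this trace is host/auth.sh cycling through digest/DH-group/key combinations: restrict the initiator's DH-HMAC-CHAP options, attach to the target with the key under test, confirm the controller appears, then detach before the next combination. The following is a minimal sketch of that host-side RPC sequence for one combination, not the test script itself. It assumes scripts/rpc.py (the standard SPDK RPC client) stands in for the harness's rpc_cmd wrapper, that the named keys (key0/ckey0 and so on) are already provisioned on both initiator and target, and that jq is available; the addresses, NQNs and flags are copied from the logged commands, while the shell variable names are illustrative only.

#!/usr/bin/env bash
# Illustrative sketch: replays the host-side RPC sequence visible in the trace
# above for a single digest/dhgroup/key combination. Values mirror the log;
# key registration is assumed to have happened earlier and is not shown.
set -euo pipefail

RPC=${RPC:-./scripts/rpc.py}        # standard SPDK RPC client (assumed path)
TARGET_IP=10.0.0.1                  # from the logged attach_controller call
TARGET_PORT=4420
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0

digest=sha384
dhgroup=ffdhe6144
keyid=0

# 1. Restrict the initiator to the digest/DH group under test.
"$RPC" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Connect using the key under test and, when present, the controller key.
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$TARGET_IP" -s "$TARGET_PORT" -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# 3. Verify the controller came up, then tear it down before the next combination.
"$RPC" bdev_nvme_get_controllers | jq -r '.[].name'
"$RPC" bdev_nvme_detach_controller nvme0

The "nvme0n1" markers interleaved in the log are the namespace showing up after each successful authenticated attach, which is what the [[ nvme0 == nvme0 ]] check in the trace is gating on.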
00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.787 21:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.048 nvme0n1 00:28:11.048 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.308 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
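The get_main_ns_ip fragments that keep recurring in the trace (ip_candidates, NVMF_INITIATOR_IP, echo 10.0.0.1) are the helper resolving which address to hand to bdev_nvme_attach_controller. Below is a rough, hedged reconstruction of what those lines suggest the helper does; the TEST_TRANSPORT variable name and the early-return fallback behaviour are assumptions, only the candidate map and the indirect expansion of NVMF_INITIATOR_IP to 10.0.0.1 are taken from the log.

# Sketch only, inferred from the repeated nvmf/common.sh trace lines above.
get_main_ns_ip() {
    local ip transport=${TEST_TRANSPORT:-tcp}   # "tcp" in this run
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    # The trace checks both the transport and the mapped variable name.
    [[ -n $transport ]] || return 1
    ip=${ip_candidates[$transport]:-}
    [[ -n $ip ]] || return 1

    # Indirect expansion: NVMF_INITIATOR_IP holds 10.0.0.1 in this run.
    [[ -n ${!ip:-} ]] || return 1
    echo "${!ip}"
}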
00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.309 21:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.879 nvme0n1 00:28:11.879 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.879 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.879 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.879 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.879 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.879 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.139 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.709 nvme0n1 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.709 21:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.032 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.630 nvme0n1 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.630 21:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.571 nvme0n1 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.571 21:18:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.571 21:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.140 nvme0n1 00:28:15.140 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.141 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.141 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.141 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.141 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.141 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.401 nvme0n1 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.401 21:18:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:15.401 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.402 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.402 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.402 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.402 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.402 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:15.402 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.402 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.662 nvme0n1 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.662 21:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.922 nvme0n1 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.922 21:18:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:15.922 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.923 21:18:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.923 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.185 nvme0n1 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.185 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.460 nvme0n1 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.460 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.725 nvme0n1 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.725 
21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.725 21:18:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.725 21:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.725 nvme0n1 00:28:16.725 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.725 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.725 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
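Each key slot in this trace is first pushed to the kernel target through nvmet_auth_set_key, whose expansion is visible at host/auth.sh@42-@51: the digest is wrapped as 'hmac(sha512)', the FFDHE group name is echoed, then the DHHC-1 host key and, when one is configured, the controller key. set -x does not print redirections, so the configfs destinations in the sketch below are assumptions; only the echoed values come from the trace.

# Sketch of the nvmet_auth_set_key expansion seen above. The configfs paths and
# attribute names are assumptions about the kernel nvmet target; the keys/ckeys
# lookups are presumed from the arrays iterated at host/auth.sh@102.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac($digest)" > "$host_dir/dhchap_hash"       # 'hmac(sha512)' in this run
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"    # e.g. ffdhe3072
    echo "$key"          > "$host_dir/dhchap_key"        # DHHC-1:xx:... host key
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # only for bidirectional auth
}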
00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.986 nvme0n1 00:28:16.986 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.248 21:18:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
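Zooming out, the sha512 pass that produces all of the output above is two nested loops, visible wherever host/auth.sh@101 and @102 expand: every FFDHE group is tried against every key slot, and each combination is installed on the target and then exercised from the initiator. A sketch of that outer structure, using the helper names from the trace (this section shows ffdhe2048 through ffdhe6144; the full contents of the dhgroups and keys arrays are not visible here and are assumptions beyond that):

# Outer loop structure expanded at host/auth.sh@101-@104.
for dhgroup in "${dhgroups[@]}"; do          # ffdhe2048..ffdhe6144 appear in this section
    for keyid in "${!keys[@]}"; do           # key slots 0..4 in this run
        nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # target side (configfs)
        connect_authenticate sha512 "$dhgroup" "$keyid"   # initiator side (SPDK RPCs)
    done
done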
00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.248 nvme0n1 00:28:17.248 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:17.509 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.510 
21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.510 nvme0n1 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.510 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.771 21:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.032 nvme0n1 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.032 21:18:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.032 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.293 nvme0n1 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
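The initiator half of each iteration is connect_authenticate, expanded at host/auth.sh@55-@65 throughout this trace: restrict the bdev/nvme module to the digest and dhgroup under test, attach to the target at 10.0.0.1:4420 with the matching key, confirm the controller actually appeared, then detach. A minimal sketch of that cycle, assuming rpc_cmd is the SPDK test wrapper around scripts/rpc.py used by this suite and reusing the NQNs and address from this run:

# Per-key connect/verify/detach cycle, mirroring the RPC calls in the trace above.
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey_arg=()

    # Key slot 4 in this run has no controller key, so the flag is added conditionally,
    # matching the ckey=(...) expansion at host/auth.sh@58.
    [[ -n ${ckeys[$keyid]:-} ]] && ckey_arg=(--dhchap-ctrlr-key "ckey$keyid")

    # Restrict the initiator to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key named keyN; NQNs and address are the ones from this run.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey_arg[@]}"

    # The controller only shows up if DH-HMAC-CHAP authentication succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    rpc_cmd bdev_nvme_detach_controller nvme0
}

Invoked as, for example, connect_authenticate_sketch sha512 ffdhe4096 2, which matches the iteration traced above.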
00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.293 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.294 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.294 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.554 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.554 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.554 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.554 nvme0n1 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.815 21:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.076 nvme0n1 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.076 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.337 nvme0n1 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.337 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.338 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.598 21:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.859 nvme0n1 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
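Every iteration in this portion of the trace follows the same shape: nvmet_auth_set_key stages the target-side secret (the echoes of 'hmac(sha512)', the ffdhe group, and the DHHC-1 key/ctrlr-key above), then connect_authenticate points the host at the matching pair and attaches. A minimal sketch of the host-side half, using only the RPCs that appear in this log, is below; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, and key1/ckey1 are whatever names auth.sh registered earlier in the run, so treat the literal values as placeholders rather than a definitive recipe.

# Host-side pattern for one (digest, dhgroup, keyid) combination, as seen in the trace.
# Assumes rpc_cmd wraps scripts/rpc.py and that key1/ckey1 already exist; values are illustrative.
digest=sha512
dhgroup=ffdhe6144
keyid=1
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
# Confirm the controller authenticated and came up, then detach before the next case.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

When a keyid has no controller key (keyid 4 in this run), the ckey argument is simply omitted, which is why the key4 attaches above carry --dhchap-key only and the ckey check reduces to [[ -z '' ]].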
00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.859 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.120 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.381 nvme0n1 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.381 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.642 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.643 21:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.904 nvme0n1 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.904 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.905 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.477 nvme0n1 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.477 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.478 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.478 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.478 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.478 21:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.478 21:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.478 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.478 21:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.050 nvme0n1 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.050 21:18:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg3NTZjZGY2MTNmZmY2MTljNTVlZWU2ZGZkNjBhNDaZeBPS: 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzU0MGE4NDg5YmVhODI1ZmE0OWVmNjY2NDJmOWRjODQ4N2NkNzJmNWJmZTFmODEyYzBiNjAyODAxZmVmMmQzOIPMDDs=: 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.050 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.993 nvme0n1 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.993 21:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.993 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.993 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.993 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.993 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.993 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.994 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.565 nvme0n1 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.565 21:18:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU1ZWE4OGJlMGNhZTRhNzBiNDk5NmIzMjg1NjlkM2RenWt3: 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: ]] 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDc3ZTc4NGUzYmM4YTA4OTVkY2RmMTViZDgxYWY3OGLkWUBu: 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.565 21:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.508 nvme0n1 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDg3ZGE1MWY3M2QyOTBhNTBkZjQ4ZDI0ZGE3MDdmY2MzZmE3OTc4NjllOTVmNWE3CBryQg==: 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: ]] 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I3MWMzYWMzZmRmY2U5MTg2Mzc3ZjEwMWM4YTUxZDlSQ3f1: 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:24.508 21:18:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.508 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.509 21:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.080 nvme0n1 00:28:25.080 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.080 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.080 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.080 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.080 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.080 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.342 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNjNmNWI4ODVkNTkwZTRhNDBiNWJmZGVhZTNjNzY0OTVlOGFmYmRlNzRhZjJiODE1MWFlNjdmY2ExMmI2ZXN0QTY=: 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:25.343 21:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.916 nvme0n1 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.916 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlmNDI3M2UwMGRjYzJjZTc5YjU2NzVmM2Y1OTM1NWUzOWY0NjBlNmM3ZDNhNmU3g0sWtA==: 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTdiYjVjNzZhZTAxMjdjYzNlYWNlZGFkYmRkNWUxNDlmNzc0ZTVkODY4NGExMDA0X0f/Mg==: 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.176 
21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.176 request: 00:28:26.176 { 00:28:26.176 "name": "nvme0", 00:28:26.176 "trtype": "tcp", 00:28:26.176 "traddr": "10.0.0.1", 00:28:26.176 "adrfam": "ipv4", 00:28:26.176 "trsvcid": "4420", 00:28:26.176 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:26.176 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:26.176 "prchk_reftag": false, 00:28:26.176 "prchk_guard": false, 00:28:26.176 "hdgst": false, 00:28:26.176 "ddgst": false, 00:28:26.176 "method": "bdev_nvme_attach_controller", 00:28:26.176 "req_id": 1 00:28:26.176 } 00:28:26.176 Got JSON-RPC error response 00:28:26.176 response: 00:28:26.176 { 00:28:26.176 "code": -5, 00:28:26.176 "message": "Input/output error" 00:28:26.176 } 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.176 request: 00:28:26.176 { 00:28:26.176 "name": "nvme0", 00:28:26.176 "trtype": "tcp", 00:28:26.176 "traddr": "10.0.0.1", 00:28:26.176 "adrfam": "ipv4", 00:28:26.176 "trsvcid": "4420", 00:28:26.176 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:26.176 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:26.176 "prchk_reftag": false, 00:28:26.176 "prchk_guard": false, 00:28:26.176 "hdgst": false, 00:28:26.176 "ddgst": false, 00:28:26.176 "dhchap_key": "key2", 00:28:26.176 "method": "bdev_nvme_attach_controller", 00:28:26.176 "req_id": 1 00:28:26.176 } 00:28:26.176 Got JSON-RPC error response 00:28:26.176 response: 00:28:26.176 { 00:28:26.176 "code": -5, 00:28:26.176 "message": "Input/output error" 00:28:26.176 } 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:26.176 21:18:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.176 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.177 request: 00:28:26.177 { 00:28:26.177 "name": "nvme0", 00:28:26.177 "trtype": "tcp", 00:28:26.177 "traddr": "10.0.0.1", 00:28:26.177 "adrfam": "ipv4", 
00:28:26.177 "trsvcid": "4420", 00:28:26.177 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:26.177 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:26.177 "prchk_reftag": false, 00:28:26.177 "prchk_guard": false, 00:28:26.177 "hdgst": false, 00:28:26.177 "ddgst": false, 00:28:26.177 "dhchap_key": "key1", 00:28:26.177 "dhchap_ctrlr_key": "ckey2", 00:28:26.177 "method": "bdev_nvme_attach_controller", 00:28:26.177 "req_id": 1 00:28:26.177 } 00:28:26.177 Got JSON-RPC error response 00:28:26.177 response: 00:28:26.177 { 00:28:26.177 "code": -5, 00:28:26.177 "message": "Input/output error" 00:28:26.177 } 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:26.177 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:26.437 rmmod nvme_tcp 00:28:26.437 rmmod nvme_fabrics 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2128623 ']' 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2128623 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2128623 ']' 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2128623 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2128623 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2128623' 00:28:26.437 killing process with pid 2128623 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2128623 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2128623 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.437 21:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:28.985 21:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:33.185 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:33.185 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:33.185 21:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.FTF /tmp/spdk.key-null.tV4 /tmp/spdk.key-sha256.5Ci /tmp/spdk.key-sha384.mBc /tmp/spdk.key-sha512.AQD 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:33.185 21:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:36.484 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:36.484 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:36.484 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:36.743 00:28:36.743 real 0m59.124s 00:28:36.743 user 0m52.065s 00:28:36.743 sys 0m16.256s 00:28:36.743 21:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:36.743 21:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.743 ************************************ 00:28:36.743 END TEST nvmf_auth_host 00:28:36.743 ************************************ 00:28:36.743 21:19:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:36.743 21:19:03 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:36.743 21:19:03 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:36.743 21:19:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:36.743 21:19:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.743 21:19:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:36.743 ************************************ 00:28:36.743 START TEST nvmf_digest 00:28:36.743 ************************************ 00:28:36.743 21:19:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:37.003 * Looking for test storage... 
00:28:37.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:37.003 21:19:04 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:37.003 21:19:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.136 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:45.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:45.137 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:45.137 Found net devices under 0000:31:00.0: cvl_0_0 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:45.137 Found net devices under 0000:31:00.1: cvl_0_1 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.137 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:45.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:28:45.397 00:28:45.397 --- 10.0.0.2 ping statistics --- 00:28:45.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.397 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:28:45.397 00:28:45.397 --- 10.0.0.1 ping statistics --- 00:28:45.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.397 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:45.397 ************************************ 00:28:45.397 START TEST nvmf_digest_clean 00:28:45.397 ************************************ 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2146071 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2146071 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2146071 ']' 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.397 
21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:45.397 21:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.397 [2024-07-15 21:19:12.571853] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:28:45.397 [2024-07-15 21:19:12.571915] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.397 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.397 [2024-07-15 21:19:12.651907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.657 [2024-07-15 21:19:12.725860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.657 [2024-07-15 21:19:12.725902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.657 [2024-07-15 21:19:12.725910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.657 [2024-07-15 21:19:12.725916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.657 [2024-07-15 21:19:12.725922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
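The target bring-up recorded just above (nvmfappstart --wait-for-rpc) reduces to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and holding subsystem initialization until an RPC arrives on /var/tmp/spdk.sock. A minimal bash sketch of that pattern, assembled from the flags and paths visible in this log; the readiness loop and the spdk_get_version probe are illustrative additions rather than part of the harness, which uses its own waitforlisten helper:

    # Launch the SPDK NVMe-oF target inside the target network namespace,
    # deferring init until framework_start_init is sent over RPC.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Illustrative wait: poll the RPC socket until the app answers.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

    # Finish framework initialization so the TCP transport and the
    # 10.0.0.2:4420 listener seen later in this log can be created.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init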
00:28:45.657 [2024-07-15 21:19:12.725941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.226 null0 00:28:46.226 [2024-07-15 21:19:13.444623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.226 [2024-07-15 21:19:13.468811] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2146279 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2146279 /var/tmp/bperf.sock 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2146279 ']' 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:46.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.226 21:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.485 [2024-07-15 21:19:13.524644] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:28:46.485 [2024-07-15 21:19:13.524690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146279 ] 00:28:46.485 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.485 [2024-07-15 21:19:13.606037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.485 [2024-07-15 21:19:13.670221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.054 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.054 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:47.054 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:47.054 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:47.054 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:47.312 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.312 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.570 nvme0n1 00:28:47.829 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:47.829 21:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.829 Running I/O for 2 seconds... 
00:28:49.730 00:28:49.730 Latency(us) 00:28:49.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.730 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:49.730 nvme0n1 : 2.00 20773.06 81.14 0.00 0.00 6155.19 3194.88 19660.80 00:28:49.730 =================================================================================================================== 00:28:49.730 Total : 20773.06 81.14 0.00 0.00 6155.19 3194.88 19660.80 00:28:49.730 0 00:28:49.730 21:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:49.730 21:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:49.730 21:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:49.730 21:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:49.730 | select(.opcode=="crc32c") 00:28:49.730 | "\(.module_name) \(.executed)"' 00:28:49.730 21:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2146279 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2146279 ']' 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2146279 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2146279 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2146279' 00:28:50.037 killing process with pid 2146279 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2146279 00:28:50.037 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.037 00:28:50.037 Latency(us) 00:28:50.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.037 =================================================================================================================== 00:28:50.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2146279 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:50.037 21:19:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:50.037 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2146959 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2146959 /var/tmp/bperf.sock 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2146959 ']' 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.038 21:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:50.296 [2024-07-15 21:19:17.361338] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:28:50.296 [2024-07-15 21:19:17.361397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146959 ] 00:28:50.297 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:50.297 Zero copy mechanism will not be used. 
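Each run_bperf invocation above follows the same shape: start bdevperf idle against its own RPC socket, finish framework init, attach the NVMe-oF controller with TCP data digest enabled, then drive the timed workload from bdevperf.py. A condensed sketch of that sequence for the 128 KiB randread case, assembled from the commands shown in this log (SPDK and BPERF_SOCK are shorthand variables; the socket-readiness loop is an illustrative stand-in for the harness's waitforlisten):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf idle (-z) on core 1 (-m 2): randread, 128 KiB I/O, QD 16, 2 s.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

    # Illustrative wait for the bperf RPC socket to come up.
    until "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

    # Complete framework init, then attach the target with data digest (--ddgst) on.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Kick off the timed run; the results table is printed by bdevperf itself.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The "I/O size of 131072 is greater than zero copy threshold (65536)" notice around this run is expected for the large-block cases; as the message itself states, it only means the zero copy mechanism will not be used for those buffers.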
00:28:50.297 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.297 [2024-07-15 21:19:17.442019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.297 [2024-07-15 21:19:17.495183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.865 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:50.865 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:50.865 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:50.865 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:50.865 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:51.125 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.125 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.389 nvme0n1 00:28:51.728 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:51.728 21:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:51.728 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.728 Zero copy mechanism will not be used. 00:28:51.728 Running I/O for 2 seconds... 
00:28:53.644 00:28:53.644 Latency(us) 00:28:53.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.644 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:53.644 nvme0n1 : 2.00 3033.06 379.13 0.00 0.00 5272.26 1372.16 12615.68 00:28:53.644 =================================================================================================================== 00:28:53.644 Total : 3033.06 379.13 0.00 0.00 5272.26 1372.16 12615.68 00:28:53.644 0 00:28:53.644 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:53.644 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:53.645 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:53.645 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:53.645 | select(.opcode=="crc32c") 00:28:53.645 | "\(.module_name) \(.executed)"' 00:28:53.645 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2146959 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2146959 ']' 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2146959 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:53.905 21:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2146959 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2146959' 00:28:53.905 killing process with pid 2146959 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2146959 00:28:53.905 Received shutdown signal, test time was about 2.000000 seconds 00:28:53.905 00:28:53.905 Latency(us) 00:28:53.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.905 =================================================================================================================== 00:28:53.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2146959 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:53.905 21:19:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2147711 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2147711 /var/tmp/bperf.sock 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2147711 ']' 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:53.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:53.905 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.905 [2024-07-15 21:19:21.193528] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:28:53.905 [2024-07-15 21:19:21.193627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147711 ] 00:28:54.166 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.166 [2024-07-15 21:19:21.277443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.166 [2024-07-15 21:19:21.331055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.737 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:54.737 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:54.737 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:54.737 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:54.737 21:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.998 21:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.998 21:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.258 nvme0n1 00:28:55.258 21:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:55.258 21:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:55.258 Running I/O for 2 seconds... 
00:28:57.827 00:28:57.827 Latency(us) 00:28:57.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.828 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:57.828 nvme0n1 : 2.01 21987.14 85.89 0.00 0.00 5813.87 3822.93 14199.47 00:28:57.828 =================================================================================================================== 00:28:57.828 Total : 21987.14 85.89 0.00 0.00 5813.87 3822.93 14199.47 00:28:57.828 0 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:57.828 | select(.opcode=="crc32c") 00:28:57.828 | "\(.module_name) \(.executed)"' 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2147711 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2147711 ']' 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2147711 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2147711 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2147711' 00:28:57.828 killing process with pid 2147711 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2147711 00:28:57.828 Received shutdown signal, test time was about 2.000000 seconds 00:28:57.828 00:28:57.828 Latency(us) 00:28:57.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.828 =================================================================================================================== 00:28:57.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2147711 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:57.828 21:19:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2148514 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2148514 /var/tmp/bperf.sock 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2148514 ']' 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:57.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.828 21:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.828 [2024-07-15 21:19:24.943061] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:28:57.828 [2024-07-15 21:19:24.943120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148514 ] 00:28:57.828 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:57.828 Zero copy mechanism will not be used. 
00:28:57.828 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.828 [2024-07-15 21:19:25.022722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.828 [2024-07-15 21:19:25.076345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.771 21:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:58.771 21:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:58.771 21:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:58.771 21:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:58.771 21:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.771 21:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.771 21:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.032 nvme0n1 00:28:59.032 21:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:59.032 21:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.032 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:59.032 Zero copy mechanism will not be used. 00:28:59.032 Running I/O for 2 seconds... 
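After each timed run the harness checks which accel module actually computed the CRC32C digests; with scan_dsa=false it expects the software path. The check that is spread across the log (get_accel_stats plus the jq filter) condenses to roughly the following sketch, reusing the rpc.py path and socket shown above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Pull the accel framework stats out of bdevperf and keep only the crc32c opcode,
    # using the same jq filter as host/digest.sh above.
    read -r acc_module acc_executed < <(
        "$RPC" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    # With DSA scanning disabled, the digests are expected to have been computed by
    # the software module at least once.
    [[ $acc_module == software ]] && (( acc_executed > 0 )) &&
        echo "crc32c digests computed in software: $acc_executed"
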
00:29:00.946 00:29:00.946 Latency(us) 00:29:00.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.946 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:00.946 nvme0n1 : 2.00 4532.77 566.60 0.00 0.00 3523.56 1645.23 11414.19 00:29:00.946 =================================================================================================================== 00:29:00.946 Total : 4532.77 566.60 0.00 0.00 3523.56 1645.23 11414.19 00:29:00.946 0 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:01.208 | select(.opcode=="crc32c") 00:29:01.208 | "\(.module_name) \(.executed)"' 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2148514 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2148514 ']' 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2148514 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2148514 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2148514' 00:29:01.208 killing process with pid 2148514 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2148514 00:29:01.208 Received shutdown signal, test time was about 2.000000 seconds 00:29:01.208 00:29:01.208 Latency(us) 00:29:01.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.208 =================================================================================================================== 00:29:01.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.208 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2148514 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2146071 00:29:01.470 21:19:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2146071 ']' 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2146071 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2146071 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2146071' 00:29:01.470 killing process with pid 2146071 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2146071 00:29:01.470 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2146071 00:29:01.732 00:29:01.732 real 0m16.260s 00:29:01.732 user 0m31.861s 00:29:01.732 sys 0m3.382s 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.732 ************************************ 00:29:01.732 END TEST nvmf_digest_clean 00:29:01.732 ************************************ 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.732 ************************************ 00:29:01.732 START TEST nvmf_digest_error 00:29:01.732 ************************************ 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2149360 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2149360 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2149360 ']' 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.732 21:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:01.732 [2024-07-15 21:19:28.907271] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:29:01.732 [2024-07-15 21:19:28.907323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.732 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.732 [2024-07-15 21:19:28.982045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.993 [2024-07-15 21:19:29.053296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.993 [2024-07-15 21:19:29.053336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.993 [2024-07-15 21:19:29.053343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.993 [2024-07-15 21:19:29.053349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.993 [2024-07-15 21:19:29.053359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
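The error pass that follows uses a similar bdevperf setup (now randread, 4 KiB, qd 128, again attached with --ddgst) but routes the target's crc32c work through the accel error module, so corrupted digests surface on the initiator as the nvme_tcp.c "data digest error" messages and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions seen further down. Condensed from the RPCs that appear below (socket defaults and argument values copied from the log; an illustrative sketch, not the test script itself):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side (default /var/tmp/spdk.sock, still paused by --wait-for-rpc):
    # hand the crc32c opcode to the accel error module before the framework starts.
    "$RPC" accel_assign_opc -o crc32c -m error

    # Injection is left disabled while bdevperf attaches with --ddgst, then switched
    # to 'corrupt' (the -o/-t/-i arguments are taken verbatim from the log).
    "$RPC" accel_error_inject_error -o crc32c -t disable
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
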
00:29:01.993 [2024-07-15 21:19:29.053377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.564 [2024-07-15 21:19:29.715282] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.564 null0 00:29:02.564 [2024-07-15 21:19:29.796089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.564 [2024-07-15 21:19:29.820286] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2149416 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2149416 /var/tmp/bperf.sock 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2149416 ']' 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:02.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:02.564 21:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.825 [2024-07-15 21:19:29.876028] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:29:02.825 [2024-07-15 21:19:29.876078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149416 ] 00:29:02.825 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.825 [2024-07-15 21:19:29.955496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.825 [2024-07-15 21:19:30.010094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.399 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:03.399 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:03.399 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:03.399 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:03.660 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:03.660 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.660 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.660 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.660 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.660 21:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.921 nvme0n1 00:29:03.921 21:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:03.921 21:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.921 21:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.921 21:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.921 21:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:03.921 21:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.921 Running I/O for 2 seconds... 00:29:03.921 [2024-07-15 21:19:31.177954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:03.921 [2024-07-15 21:19:31.177985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-07-15 21:19:31.177995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.921 [2024-07-15 21:19:31.188619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:03.921 [2024-07-15 21:19:31.188640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-07-15 21:19:31.188647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.921 [2024-07-15 21:19:31.203034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:03.921 [2024-07-15 21:19:31.203053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-07-15 21:19:31.203060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.214074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.214096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.214102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.227463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.227481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.227487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.240052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.240070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.240077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.252037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.252053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23566 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.252060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.263471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.263488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.263494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.275652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.275669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.275675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.287922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.287939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.287945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.301029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.301046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.301052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.313102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.313120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.313127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.326022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.326039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.326045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.336399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.336416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:125 nsid:1 lba:19638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.336422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.349063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.349080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.349086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.362529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.362546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.362552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.373976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.373993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.374000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.387515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.387532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.387539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.398946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.398963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.398969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.410699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.410716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.410722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.422722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.422739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.422748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.435277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.435294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.435300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.447631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.447652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.447660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.460171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.460188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.460195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.183 [2024-07-15 21:19:31.472386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.183 [2024-07-15 21:19:31.472404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.183 [2024-07-15 21:19:31.472410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.483946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.483963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.483970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.496195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.496213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.496219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.508461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.508479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.508485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.521293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.521310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.521316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.533353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.533374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.533380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.544554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.544571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.544578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.556945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.556962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.556969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.569207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.569224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.569235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.581123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.581139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.581146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.594573] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.594589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.594596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.604783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.604800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.604806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.617632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.445 [2024-07-15 21:19:31.617649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.445 [2024-07-15 21:19:31.617655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.445 [2024-07-15 21:19:31.630557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.630575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.630581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.446 [2024-07-15 21:19:31.644417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.644434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.644440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.446 [2024-07-15 21:19:31.656827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.656844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.656850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.446 [2024-07-15 21:19:31.667164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.667180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.667186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:04.446 [2024-07-15 21:19:31.679573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.679590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.679596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.446 [2024-07-15 21:19:31.693115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.693131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.693138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.446 [2024-07-15 21:19:31.704442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.704458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.704465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.446 [2024-07-15 21:19:31.715430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.715447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.715453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.446 [2024-07-15 21:19:31.729213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.446 [2024-07-15 21:19:31.729234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.446 [2024-07-15 21:19:31.729240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.708 [2024-07-15 21:19:31.741045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.708 [2024-07-15 21:19:31.741062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.708 [2024-07-15 21:19:31.741073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.708 [2024-07-15 21:19:31.752602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.708 [2024-07-15 21:19:31.752619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.708 [2024-07-15 21:19:31.752626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.708 [2024-07-15 21:19:31.765768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.708 [2024-07-15 21:19:31.765785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.708 [2024-07-15 21:19:31.765791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.708 [2024-07-15 21:19:31.778765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.708 [2024-07-15 21:19:31.778781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.708 [2024-07-15 21:19:31.778787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.708 [2024-07-15 21:19:31.790850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.708 [2024-07-15 21:19:31.790867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.708 [2024-07-15 21:19:31.790873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.708 [2024-07-15 21:19:31.803837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.708 [2024-07-15 21:19:31.803854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.708 [2024-07-15 21:19:31.803859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.814527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.814544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.814550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.827338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.827355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.827361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.839977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.839994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.840000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.851907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.851925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.851931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.864778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.864795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.864801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.877374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.877391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.877397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.887703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.887720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.887726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.901507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.901524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.901530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.914686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.914703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.914709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.926081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.926098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:04.709 [2024-07-15 21:19:31.926104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.939639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.939656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.939661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.951463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.951481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.951490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.962425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.962442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.962449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.975560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.975576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.975582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.709 [2024-07-15 21:19:31.986873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.709 [2024-07-15 21:19:31.986890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.709 [2024-07-15 21:19:31.986896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:31.998984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:31.999001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:31.999007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.011821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.011838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:24790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.011844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.024844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.024861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.024867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.035563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.035580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.035586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.049345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.049362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.049369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.062455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.062476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.062482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.074253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.074270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.074276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.086351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.086369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.086375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.097504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.097521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.097527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.110391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.110408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.110414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.122790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.122807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.122813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.135013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.135030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.135036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.148183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.148200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.148206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.160881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.160897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.160904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.172331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.172348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.172354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.185724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 
00:29:04.971 [2024-07-15 21:19:32.185740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.185747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.196290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.196306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.196312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.208351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.208367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.208373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.971 [2024-07-15 21:19:32.221085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.971 [2024-07-15 21:19:32.221101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.971 [2024-07-15 21:19:32.221107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.972 [2024-07-15 21:19:32.233748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.972 [2024-07-15 21:19:32.233765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.972 [2024-07-15 21:19:32.233771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.972 [2024-07-15 21:19:32.246215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.972 [2024-07-15 21:19:32.246236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.972 [2024-07-15 21:19:32.246243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.972 [2024-07-15 21:19:32.257047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:04.972 [2024-07-15 21:19:32.257063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.972 [2024-07-15 21:19:32.257069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.271054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.271071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.271080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.282400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.282417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.282423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.293810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.293826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.293832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.306022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.306038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.306044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.318497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.318513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.318519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.332020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.332037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.332043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.344849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.344865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.344871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.356636] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.356652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.356658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.368170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.368186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.368193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.381066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.381082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.381088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.391980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.391997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.392003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.405356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.232 [2024-07-15 21:19:32.405373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.232 [2024-07-15 21:19:32.405379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.232 [2024-07-15 21:19:32.417374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.417391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.417397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.233 [2024-07-15 21:19:32.430507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.430523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.430529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:05.233 [2024-07-15 21:19:32.441217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.441238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.441245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.233 [2024-07-15 21:19:32.454884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.454901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.454907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.233 [2024-07-15 21:19:32.464861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.464877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.464883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.233 [2024-07-15 21:19:32.478256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.478272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.478281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.233 [2024-07-15 21:19:32.491368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.491385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.491391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.233 [2024-07-15 21:19:32.503290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.503306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.503313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.233 [2024-07-15 21:19:32.514763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.233 [2024-07-15 21:19:32.514779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.233 [2024-07-15 21:19:32.514785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.528431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.528448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.528455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.539881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.539899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.539905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.551392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.551408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.551415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.564990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.565007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.565014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.577734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.577751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.577757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.589903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.589922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.589928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.602891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.602907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.602913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.613181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.613198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.613204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.626404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.626421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.626427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.637150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.637167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.637173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.651429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.651445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.651452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.663265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.663282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.663288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.493 [2024-07-15 21:19:32.674493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.493 [2024-07-15 21:19:32.674510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.493 [2024-07-15 21:19:32.674516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.494 [2024-07-15 21:19:32.686748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.494 [2024-07-15 21:19:32.686765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:05.494 [2024-07-15 21:19:32.686771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.494 [2024-07-15 21:19:32.700580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.494 [2024-07-15 21:19:32.700596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.494 [2024-07-15 21:19:32.700603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.494 [2024-07-15 21:19:32.711954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.494 [2024-07-15 21:19:32.711970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.494 [2024-07-15 21:19:32.711976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.494 [2024-07-15 21:19:32.724484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.494 [2024-07-15 21:19:32.724501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.494 [2024-07-15 21:19:32.724507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.494 [2024-07-15 21:19:32.737567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.494 [2024-07-15 21:19:32.737584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.494 [2024-07-15 21:19:32.737590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.494 [2024-07-15 21:19:32.748902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.494 [2024-07-15 21:19:32.748919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.494 [2024-07-15 21:19:32.748925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.494 [2024-07-15 21:19:32.761346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.494 [2024-07-15 21:19:32.761362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.494 [2024-07-15 21:19:32.761368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.494 [2024-07-15 21:19:32.773026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.494 [2024-07-15 21:19:32.773042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:19137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.494 [2024-07-15 21:19:32.773048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.754 [2024-07-15 21:19:32.785425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.754 [2024-07-15 21:19:32.785442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.754 [2024-07-15 21:19:32.785448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.754 [2024-07-15 21:19:32.798434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.754 [2024-07-15 21:19:32.798450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.754 [2024-07-15 21:19:32.798459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.754 [2024-07-15 21:19:32.808109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.754 [2024-07-15 21:19:32.808126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.754 [2024-07-15 21:19:32.808132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.754 [2024-07-15 21:19:32.822631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.754 [2024-07-15 21:19:32.822648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.754 [2024-07-15 21:19:32.822654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.754 [2024-07-15 21:19:32.835800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.754 [2024-07-15 21:19:32.835817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.754 [2024-07-15 21:19:32.835823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.754 [2024-07-15 21:19:32.847142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.754 [2024-07-15 21:19:32.847159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.847165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.859169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.859185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.859191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.871798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.871815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.871821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.884805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.884822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.884829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.894961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.894977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.894983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.908660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.908679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.908685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.919758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.919774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.919780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.933349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.933365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.933371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.945434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 
00:29:05.755 [2024-07-15 21:19:32.945451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.945457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.958360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.958377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.958383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.971204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.971220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.971226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.982698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.982715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.982721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:32.993886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:32.993902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:32.993908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:33.006304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:33.006320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:33.006330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:33.019274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:33.019291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:33.019297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:33.032329] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:33.032346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:33.032352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.755 [2024-07-15 21:19:33.043585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:05.755 [2024-07-15 21:19:33.043602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.755 [2024-07-15 21:19:33.043608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.015 [2024-07-15 21:19:33.056166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:06.015 [2024-07-15 21:19:33.056182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.015 [2024-07-15 21:19:33.056189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.015 [2024-07-15 21:19:33.069186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:06.015 [2024-07-15 21:19:33.069203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.015 [2024-07-15 21:19:33.069209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.015 [2024-07-15 21:19:33.081226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:06.015 [2024-07-15 21:19:33.081245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.015 [2024-07-15 21:19:33.081251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.015 [2024-07-15 21:19:33.091841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:06.015 [2024-07-15 21:19:33.091858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.015 [2024-07-15 21:19:33.091864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.015 [2024-07-15 21:19:33.105151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70) 00:29:06.015 [2024-07-15 21:19:33.105168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.015 [2024-07-15 21:19:33.105174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:29:06.015 [2024-07-15 21:19:33.117082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70)
00:29:06.015 [2024-07-15 21:19:33.117102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.015 [2024-07-15 21:19:33.117108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:06.015 [2024-07-15 21:19:33.129755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70)
00:29:06.015 [2024-07-15 21:19:33.129772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.015 [2024-07-15 21:19:33.129778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:06.015 [2024-07-15 21:19:33.141396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70)
00:29:06.015 [2024-07-15 21:19:33.141413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.015 [2024-07-15 21:19:33.141419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:06.015 [2024-07-15 21:19:33.154138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d68a70)
00:29:06.015 [2024-07-15 21:19:33.154155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.015 [2024-07-15 21:19:33.154162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:06.015
00:29:06.015 Latency(us)
00:29:06.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:06.015 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:06.015 nvme0n1 : 2.00 20732.40 80.99 0.00 0.00 6167.47 3181.23 17913.17
00:29:06.015 ===================================================================================================================
00:29:06.015 Total : 20732.40 80.99 0.00 0.00 6167.47 3181.23 17913.17
00:29:06.015 0
00:29:06.015 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:06.015 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:06.015 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:06.015 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:06.015 | .driver_specific
00:29:06.015 | .nvme_error
00:29:06.015 | .status_code
00:29:06.015 | .command_transient_transport_error'
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2149416
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2149416 ']'
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2149416
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2149416
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2149416'
00:29:06.276 killing process with pid 2149416
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2149416
00:29:06.276 Received shutdown signal, test time was about 2.000000 seconds
00:29:06.276
00:29:06.276 Latency(us)
00:29:06.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:06.276 ===================================================================================================================
00:29:06.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2149416
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2150176
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2150176 /var/tmp/bperf.sock
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2150176 ']'
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:06.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:06.276 21:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:06.536 [2024-07-15 21:19:33.569936] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization...
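The trace above shows how the harness brings up the I/O generator for this error-injection pass: bdevperf is started on a private RPC socket in wait-for-RPC mode (-z) with the workload parameters for the pass (randread, 128 KiB I/O, queue depth 16, 2 second runtime), and the script then waits until the UNIX socket is listening before configuring it. A minimal stand-alone sketch of the same launch, assuming an SPDK build tree under ./spdk and a plain polling loop in place of the harness's waitforlisten helper:

    #!/usr/bin/env bash
    # Sketch only: start bdevperf in wait-for-RPC mode (-z) on its own socket,
    # then poll the RPC server until it answers. The SPDK path is an assumption.
    SPDK=./spdk
    SOCK=/var/tmp/bperf.sock

    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # rpc_get_methods succeeds once the RPC socket is up and serving requests.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "bdevperf (pid $bperfpid) is listening on $SOCK"

With -z the process comes up idle and only runs the workload once a perform_tests RPC arrives, which is what lets the script attach the controller and arm the error injection first.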
00:29:06.536 [2024-07-15 21:19:33.570030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150176 ]
00:29:06.536 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:06.536 Zero copy mechanism will not be used.
00:29:06.536 EAL: No free 2048 kB hugepages reported on node 1
00:29:06.536 [2024-07-15 21:19:33.654641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:06.536 [2024-07-15 21:19:33.707860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:07.107 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:07.107 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:07.107 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:07.107 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:07.368 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:07.368 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:07.368 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.368 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:07.368 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.368 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.628 nvme0n1
00:29:07.628 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:07.628 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:07.628 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.628 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:07.628 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:07.628 21:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:07.628 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:07.628 Zero copy mechanism will not be used.
00:29:07.628 Running I/O for 2 seconds...
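Once the new bdevperf target is up, the traced RPC sequence is the core of the digest-error scenario: NVMe error statistics are enabled and bdev-level retries are turned off, crc32c error injection is cleared while the NVMe-oF/TCP controller is attached with data digest enabled (--ddgst), injection is then switched to corrupt mode for the next 32 operations, and perform_tests kicks off the queued workload. The flood of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" messages that follows is the expected outcome, and the harness later counts them through bdev_get_iostat exactly as traced after the previous pass. A condensed sketch of the same sequence, assuming the SPDK checkout sits at ./spdk:

    #!/usr/bin/env bash
    # Sketch only: mirrors the RPCs traced above against the bdevperf RPC socket.
    SPDK=./spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-status error counters and never retry failed I/O in the bdev layer.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection stays off while the controller attaches.
    $RPC accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF/TCP controller with TCP data digest enabled.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c results so received data digests fail verification.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Start the workload that bdevperf (-z) has been waiting for.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

    # Read back how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR.
    errcount=$($RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"

The readout at the end uses the same jq filter the harness applies in get_transient_errcount; the pass succeeds as long as that counter is non-zero.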
00:29:07.628 [2024-07-15 21:19:34.843418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.628 [2024-07-15 21:19:34.843450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.628 [2024-07-15 21:19:34.843458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.628 [2024-07-15 21:19:34.852017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.628 [2024-07-15 21:19:34.852037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.628 [2024-07-15 21:19:34.852043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.628 [2024-07-15 21:19:34.861309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.628 [2024-07-15 21:19:34.861327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.628 [2024-07-15 21:19:34.861334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.628 [2024-07-15 21:19:34.870129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.628 [2024-07-15 21:19:34.870146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.628 [2024-07-15 21:19:34.870153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.628 [2024-07-15 21:19:34.878480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.628 [2024-07-15 21:19:34.878498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.628 [2024-07-15 21:19:34.878505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.628 [2024-07-15 21:19:34.887595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.628 [2024-07-15 21:19:34.887612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.628 [2024-07-15 21:19:34.887619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.628 [2024-07-15 21:19:34.898497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.628 [2024-07-15 21:19:34.898515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.629 [2024-07-15 21:19:34.898521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.629 [2024-07-15 21:19:34.906654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.629 [2024-07-15 21:19:34.906671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.629 [2024-07-15 21:19:34.906677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.629 [2024-07-15 21:19:34.914700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.629 [2024-07-15 21:19:34.914718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.629 [2024-07-15 21:19:34.914724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.924382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.924399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.924405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.933403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.933420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.933426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.942058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.942075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.942081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.949802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.949819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.949825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.958874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.958891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.958898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.967415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.967436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.967442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.975747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.975764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.975770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.984933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.984950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.984956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:34.993065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:34.993083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:34.993089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:35.002528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:35.002546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:35.002552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:35.012155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:35.012173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.890 [2024-07-15 21:19:35.012179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.890 [2024-07-15 21:19:35.020889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.890 [2024-07-15 21:19:35.020908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:07.891 [2024-07-15 21:19:35.020914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.028682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.028699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.028706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.036661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.036679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.036686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.043961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.043979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.043985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.052034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.052052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.052058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.060026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.060043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.060050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.070126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.070144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.070151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.077797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.077815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.077821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.084533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.084551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.084557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.093417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.093435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.093441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.102432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.102449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.102455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.111720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.111738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.111748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.121861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.121879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.121885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.131196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.131213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.131219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.140077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.140095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.140101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.150028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.150044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.150051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.158274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.158292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.158298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.167198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.167215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.167221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.891 [2024-07-15 21:19:35.175650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:07.891 [2024-07-15 21:19:35.175667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.891 [2024-07-15 21:19:35.175672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.184101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.184118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.184124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.193124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.193144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.193150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.201017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 
00:29:08.153 [2024-07-15 21:19:35.201034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.201040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.209207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.209224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.209235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.215936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.215953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.215959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.223977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.223993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.223999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.231074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.231091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.231097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.241485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.241502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.241508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.252010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.252027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.252033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.261049] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.261066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.261075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.270715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.270732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.270738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.279868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.279884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.279891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.288207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.288223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.288234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.294418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.294434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.294440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.300467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.300484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.300491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.306489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.306506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.306512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.314821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.314839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.314845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.323268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.323285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.323291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.331244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.331267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.331273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.339429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.339446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.339452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.348312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.348330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.348336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.356185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.356207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.356215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.365039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.365059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.365065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.373945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.373963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.373970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.381936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.381954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.381961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.390661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.390678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.153 [2024-07-15 21:19:35.390685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.153 [2024-07-15 21:19:35.399439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.153 [2024-07-15 21:19:35.399457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.154 [2024-07-15 21:19:35.399463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.154 [2024-07-15 21:19:35.408623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.154 [2024-07-15 21:19:35.408641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.154 [2024-07-15 21:19:35.408648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.154 [2024-07-15 21:19:35.416550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.154 [2024-07-15 21:19:35.416569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.154 [2024-07-15 21:19:35.416575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.154 [2024-07-15 21:19:35.425190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.154 [2024-07-15 21:19:35.425209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.154 [2024-07-15 21:19:35.425215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.154 [2024-07-15 21:19:35.433079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.154 [2024-07-15 21:19:35.433096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.154 [2024-07-15 21:19:35.433103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.154 [2024-07-15 21:19:35.441784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.154 [2024-07-15 21:19:35.441802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.154 [2024-07-15 21:19:35.441808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.448989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.449007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.415 [2024-07-15 21:19:35.449013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.455954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.455972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.415 [2024-07-15 21:19:35.455979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.466400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.466418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.415 [2024-07-15 21:19:35.466424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.474143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.474162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.415 [2024-07-15 21:19:35.474171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.480816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.480834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:08.415 [2024-07-15 21:19:35.480840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.487272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.487297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.415 [2024-07-15 21:19:35.487303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.494501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.494519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.415 [2024-07-15 21:19:35.494525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.501812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.501830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.415 [2024-07-15 21:19:35.501836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.415 [2024-07-15 21:19:35.510241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.415 [2024-07-15 21:19:35.510259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.415 [2024-07-15 21:19:35.510265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.519242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.519260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.519266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.527960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.527978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.527984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.535773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.535790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.535796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.544128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.544150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.544156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.552844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.552862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.552868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.559031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.559049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.559055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.565619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.565637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.565643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.572155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.572173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.572179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.579522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.579541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.579547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.587125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.587144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.587150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.596329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.596347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.596353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.604766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.604783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.604790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.613821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.613840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.613845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.621843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.621862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.621868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.628790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.628808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.628814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.637052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.637070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.637075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.644147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.644164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.644170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.652027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.652045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.652052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.658162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.658180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.658186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.666988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.667006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.667012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.674529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.674547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.674557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.684733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.684751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.684757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.691903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.416 [2024-07-15 21:19:35.691921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.691927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.416 [2024-07-15 21:19:35.702342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 
00:29:08.416 [2024-07-15 21:19:35.702361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.416 [2024-07-15 21:19:35.702367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.709175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.709193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.709199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.717150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.717168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.717175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.724552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.724568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.724574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.731374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.731392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.731399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.738947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.738966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.738972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.746278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.746297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.746304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.752971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.752989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.752996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.760516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.760534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.760540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.770005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.770023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.770030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.778322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.778339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.778345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.788132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.788149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.788155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.797059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.797076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.797082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.804203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.804221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.804227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.811984] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.812002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.812010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.821900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.821918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.821924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.829267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.829284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.829290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.838222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.838244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.838250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.848720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.848738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.848745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.857159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.857177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.857183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.863367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.863385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.863391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:08.678 [2024-07-15 21:19:35.871851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.871868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.871874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.878632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.678 [2024-07-15 21:19:35.878650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.678 [2024-07-15 21:19:35.878656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.678 [2024-07-15 21:19:35.885779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.885800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.885806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.892979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.892997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.893003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.900305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.900323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.900329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.907663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.907680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.907686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.915607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.915625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.915631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.923082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.923100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.923106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.933179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.933198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.933204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.940142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.940160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.940166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.947598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.947615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.947621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.953938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.953956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.953961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.679 [2024-07-15 21:19:35.962826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.679 [2024-07-15 21:19:35.962844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.679 [2024-07-15 21:19:35.962850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.940 [2024-07-15 21:19:35.972803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.940 [2024-07-15 21:19:35.972822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.940 [2024-07-15 21:19:35.972828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.940 [2024-07-15 21:19:35.979786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.940 [2024-07-15 21:19:35.979804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.940 [2024-07-15 21:19:35.979810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.940 [2024-07-15 21:19:35.988391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.940 [2024-07-15 21:19:35.988409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.940 [2024-07-15 21:19:35.988415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.940 [2024-07-15 21:19:35.998136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.940 [2024-07-15 21:19:35.998154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.940 [2024-07-15 21:19:35.998161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.940 [2024-07-15 21:19:36.005477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.940 [2024-07-15 21:19:36.005496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.940 [2024-07-15 21:19:36.005502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.940 [2024-07-15 21:19:36.012818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.940 [2024-07-15 21:19:36.012836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.012843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.019499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.019515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.019525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.028491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.028509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 
[2024-07-15 21:19:36.028515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.039424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.039441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.039447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.048410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.048428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.048434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.056730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.056748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.056754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.065013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.065031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.065037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.074139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.074157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.074163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.083667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.083685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.083691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.091100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.091118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.091124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.098501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.098522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.098528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.105765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.105782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.105788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.113247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.113264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.113271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.122701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.122719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.122725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.130644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.130662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.130668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.137584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.137603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.137608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.147908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.147926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.147932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.156506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.156524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.156530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.163787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.163805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.163811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.173539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.173557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.173563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.181896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.181914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.181920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.188525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.188543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.188550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.194881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.194898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.194905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.201201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.201219] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.201226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.208048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.208066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.208073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.214176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.214194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.214200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.221828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.221846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.221852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.941 [2024-07-15 21:19:36.229068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:08.941 [2024-07-15 21:19:36.229085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.941 [2024-07-15 21:19:36.229094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.236538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.236556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.236562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.244898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.244916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.244922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.252806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.252824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.252830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.260397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.260414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.260420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.267456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.267473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.267480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.275895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.275913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.275919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.284919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.284936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.284943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.294419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.294436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.294442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.303850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.303867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.303873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.310295] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.310313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.310319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.317376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.317394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.317400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.324179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.324196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.324202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.330896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.330914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.330920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.339152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.339170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.339176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.347630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.347648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.347654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.356131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.356148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.356155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:09.203 [2024-07-15 21:19:36.365342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.365359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.365371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.374686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.374703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.374710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.383333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.383351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.383357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.390962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.390981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.390987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.400455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.400473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.400479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.407785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.407803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.407809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.417422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.417440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.417446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.425104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.425122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.425128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.433457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.433475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.433481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.440836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.203 [2024-07-15 21:19:36.440856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.203 [2024-07-15 21:19:36.440863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.203 [2024-07-15 21:19:36.448473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.204 [2024-07-15 21:19:36.448490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.204 [2024-07-15 21:19:36.448497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.204 [2024-07-15 21:19:36.456021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.204 [2024-07-15 21:19:36.456039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.204 [2024-07-15 21:19:36.456045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.204 [2024-07-15 21:19:36.464197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.204 [2024-07-15 21:19:36.464215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.204 [2024-07-15 21:19:36.464222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.204 [2024-07-15 21:19:36.472834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.204 [2024-07-15 21:19:36.472851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.204 [2024-07-15 21:19:36.472857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.204 [2024-07-15 21:19:36.481968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.204 [2024-07-15 21:19:36.481986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.204 [2024-07-15 21:19:36.481992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.204 [2024-07-15 21:19:36.491521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.204 [2024-07-15 21:19:36.491539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.204 [2024-07-15 21:19:36.491545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.501191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.501209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.501215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.510933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.510951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.510957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.518214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.518237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.518243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.525572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.525589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.525595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.532017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.532035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 
[2024-07-15 21:19:36.532041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.538312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.538329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.538335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.545586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.545603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.545609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.554112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.554130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.554136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.561928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.561946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.561952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.570437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.570455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.570461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.579639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.579657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.579666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.588630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.588648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.588654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.595676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.595693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.595700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.602621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.602639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.602645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.614292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.614309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.614315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.623475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.623494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.623500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.632717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.632734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.632741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.641669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.641687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.641693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.650580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.650598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.650604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.659648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.659670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.659675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.666679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.666697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.666703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.673926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.673943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.673949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.682346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.682364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.682369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.690212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.690228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.690241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.701311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.466 [2024-07-15 21:19:36.701329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.466 [2024-07-15 21:19:36.701336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.466 [2024-07-15 21:19:36.710226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.467 [2024-07-15 21:19:36.710249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.467 [2024-07-15 21:19:36.710255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.467 [2024-07-15 21:19:36.718860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.467 [2024-07-15 21:19:36.718878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.467 [2024-07-15 21:19:36.718884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.467 [2024-07-15 21:19:36.728030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.467 [2024-07-15 21:19:36.728048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.467 [2024-07-15 21:19:36.728054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.467 [2024-07-15 21:19:36.736223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.467 [2024-07-15 21:19:36.736245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.467 [2024-07-15 21:19:36.736251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.467 [2024-07-15 21:19:36.745531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.467 [2024-07-15 21:19:36.745549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.467 [2024-07-15 21:19:36.745555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.467 [2024-07-15 21:19:36.753322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.467 [2024-07-15 21:19:36.753340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.467 [2024-07-15 21:19:36.753347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.761019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.729 [2024-07-15 21:19:36.761039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.761045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.769992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.729 
[2024-07-15 21:19:36.770010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.770016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.779063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.729 [2024-07-15 21:19:36.779081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.779087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.787655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.729 [2024-07-15 21:19:36.787673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.787679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.794324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.729 [2024-07-15 21:19:36.794342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.794348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.801965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.729 [2024-07-15 21:19:36.801983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.801993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.811725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.729 [2024-07-15 21:19:36.811743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.811750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.822943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc4430) 00:29:09.729 [2024-07-15 21:19:36.822961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.822968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.729 [2024-07-15 21:19:36.831401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fc4430) 00:29:09.729 [2024-07-15 21:19:36.831419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.729 [2024-07-15 21:19:36.831425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.729 00:29:09.729 Latency(us) 00:29:09.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.729 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:09.729 nvme0n1 : 2.00 3749.49 468.69 0.00 0.00 4263.46 1037.65 14417.92 00:29:09.729 =================================================================================================================== 00:29:09.729 Total : 3749.49 468.69 0.00 0.00 4263.46 1037.65 14417.92 00:29:09.729 0 00:29:09.729 21:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:09.729 21:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:09.729 21:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:09.729 21:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:09.729 | .driver_specific 00:29:09.729 | .nvme_error 00:29:09.729 | .status_code 00:29:09.729 | .command_transient_transport_error' 00:29:09.729 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 242 > 0 )) 00:29:09.729 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2150176 00:29:09.729 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2150176 ']' 00:29:09.729 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2150176 00:29:09.990 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:09.990 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.990 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2150176 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2150176' 00:29:09.991 killing process with pid 2150176 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2150176 00:29:09.991 Received shutdown signal, test time was about 2.000000 seconds 00:29:09.991 00:29:09.991 Latency(us) 00:29:09.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.991 =================================================================================================================== 00:29:09.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2150176 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 
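The check traced above is the heart of each digest-error pass: the NVMe bdev module was configured with --nvme-error-stat, so completions are counted per status code, and the test only asks whether the COMMAND TRANSIENT TRANSPORT ERROR counter (242 in this run) is non-zero before killing that bdevperf instance (pid 2150176) and moving on to the randwrite/4096/128 variant traced next. A minimal sketch of that query, with SPDK_DIR standing in for the workspace checkout used by this job:

# Workspace checkout used by this job.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Pull the per-bdev NVMe error counters over the bperf RPC socket and extract
# the number of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The test only requires the counter to be non-zero.
(( errcount > 0 )) && echo "digest errors surfaced as $errcount transient transport errors"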
00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2150923 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2150923 /var/tmp/bperf.sock 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2150923 ']' 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.991 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.991 [2024-07-15 21:19:37.234275] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
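For this variant the harness launches a fresh bdevperf (pid 2150923 above) pinned to core 1 (-m 2), with its JSON-RPC server on a private UNIX socket, doing 4 KiB random writes at queue depth 128 for 2 seconds, and -z so the workload does not start until perform_tests is called; it then waits for the socket before configuring anything, while the SPDK/DPDK startup banner continues below. A rough equivalent of that launch, with the polling loop as a stand-in for the harness's waitforlisten helper:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start bdevperf idle (-z): core mask 0x2, RPC on a private socket,
# 4 KiB random writes, queue depth 128, 2 second run once triggered.
"$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
  -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# Poll the RPC socket until the app answers (stand-in for waitforlisten).
until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done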
00:29:09.991 [2024-07-15 21:19:37.234334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150923 ] 00:29:09.991 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.251 [2024-07-15 21:19:37.313555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.251 [2024-07-15 21:19:37.367038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.820 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.820 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:10.821 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:10.821 21:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:11.081 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:11.081 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.081 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.081 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.081 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.081 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.341 nvme0n1 00:29:11.341 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:11.341 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.341 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.341 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.341 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:11.341 21:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.341 Running I/O for 2 seconds... 
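With bdevperf up, the trace above configures the run entirely over RPC: the NVMe bdev module is told to keep per-status error counters and retry indefinitely (--nvme-error-stat --bdev-retry-count -1), any leftover crc32c injection is cleared, the controller is attached with data digests enabled (--ddgst), the accel layer is told to corrupt every 256th crc32c operation, and perform_tests starts the timed workload. Each corrupted digest then appears in the output below as a data digest error completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A sketch of that sequence, assuming the same sockets as this run; the accel_error_inject_error calls go through rpc_cmd, whose socket is hidden by xtrace_disable, so the sketch assumes the default RPC socket of the long-running target application:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Count NVMe completions per status code and retry failed I/O forever, so the
# injected digest errors show up as counters instead of failing the workload.
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
  --nvme-error-stat --bdev-retry-count -1
# Clear any previous crc32c injection before connecting (default RPC socket,
# assumed here to reach the nvmf target application).
"$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
# Attach the TCP controller with data digests enabled.
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt every 256th crc32c accel operation, then start the timed run.
"$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests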
00:29:11.341 [2024-07-15 21:19:38.521180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f6cc8 00:29:11.341 [2024-07-15 21:19:38.522235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.341 [2024-07-15 21:19:38.522264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:11.341 [2024-07-15 21:19:38.534123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f20d8 00:29:11.341 [2024-07-15 21:19:38.535383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.341 [2024-07-15 21:19:38.535402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:11.341 [2024-07-15 21:19:38.547731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f0ff8 00:29:11.341 [2024-07-15 21:19:38.549659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.341 [2024-07-15 21:19:38.549675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:11.341 [2024-07-15 21:19:38.558406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f57b0 00:29:11.341 [2024-07-15 21:19:38.559849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.341 [2024-07-15 21:19:38.559866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.341 [2024-07-15 21:19:38.571508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fac10 00:29:11.341 [2024-07-15 21:19:38.573427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.341 [2024-07-15 21:19:38.573442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:11.341 [2024-07-15 21:19:38.581031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e23b8 00:29:11.341 [2024-07-15 21:19:38.582303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.341 [2024-07-15 21:19:38.582319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:11.341 [2024-07-15 21:19:38.593928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.341 [2024-07-15 21:19:38.595385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.341 [2024-07-15 21:19:38.595401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 
p:0 m:0 dnr:0 00:29:11.341 [2024-07-15 21:19:38.605740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.341 [2024-07-15 21:19:38.607184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.341 [2024-07-15 21:19:38.607200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.341 [2024-07-15 21:19:38.617531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.342 [2024-07-15 21:19:38.618974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.342 [2024-07-15 21:19:38.618989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.342 [2024-07-15 21:19:38.629313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.342 [2024-07-15 21:19:38.630756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.342 [2024-07-15 21:19:38.630772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.641062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.642513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.642528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.652860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.654282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.654297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.664612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.666050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.666065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.676383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.677828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.677845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.688122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.689576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.689592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.699891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.701321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.701336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.711644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.713083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.713098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.723419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.724854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.724870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.735157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.736564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.736579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.746922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.748367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.748384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.758674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.760111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.760127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.770446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.771889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.771904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.782190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.783641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.783657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.793957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.795387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.795405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.805702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.807144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.807159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.817493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.818930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.818946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.829256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.830701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.830717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.841034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.842449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.842465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.852783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.854232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.854248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.864567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.866010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.866026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.876307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.877742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.877757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.603 [2024-07-15 21:19:38.888050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.603 [2024-07-15 21:19:38.889509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.603 [2024-07-15 21:19:38.889524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.864 [2024-07-15 21:19:38.899803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.864 [2024-07-15 21:19:38.901257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.864 [2024-07-15 21:19:38.901272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.864 [2024-07-15 21:19:38.911570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.864 [2024-07-15 21:19:38.913023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:38.913039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:38.923435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:38.924882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 
21:19:38.924898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:38.935215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:38.936636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:38.936651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:38.946954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:38.948409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:38.948425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:38.958717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:38.960158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:38.960174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:38.970482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:38.971885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:38.971900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:38.982253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:38.983687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:38.983702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:38.993984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:38.995415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:38.995430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.005735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.007174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:11.865 [2024-07-15 21:19:39.007190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.017480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.018919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.018935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.029237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.030693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.030709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.040991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.042440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.042456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.052766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.054209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.054224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.064510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.065950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.065965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.076267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.077705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.077721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.088088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.089539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20093 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.089555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.099862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.101295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.101314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.111612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.113045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.113061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.123378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.124811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.124827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.135145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.136552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.136567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.865 [2024-07-15 21:19:39.146907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:11.865 [2024-07-15 21:19:39.148312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.865 [2024-07-15 21:19:39.148327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.158888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.160328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.160344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.170653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.172093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:19991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.172109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.182398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.183835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.183851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.194155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.195594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.195610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.205877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.207316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.207332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.217665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.219104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.219119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.229412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.230859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.230875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.241182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.242626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.242642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.252929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.254342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:108 nsid:1 lba:6995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.254358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.264686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.266120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.266136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.276438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.277875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.277891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.288195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.289620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.289636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.299953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.301398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.301413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.311699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.313138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.313153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.323427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.324861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.324877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.335169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.126 [2024-07-15 21:19:39.336615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.126 [2024-07-15 21:19:39.336630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.126 [2024-07-15 21:19:39.346914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.127 [2024-07-15 21:19:39.348317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.127 [2024-07-15 21:19:39.348334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.127 [2024-07-15 21:19:39.358688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.127 [2024-07-15 21:19:39.360125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.127 [2024-07-15 21:19:39.360140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.127 [2024-07-15 21:19:39.370427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.127 [2024-07-15 21:19:39.371863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.127 [2024-07-15 21:19:39.371879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.127 [2024-07-15 21:19:39.382159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.127 [2024-07-15 21:19:39.383597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.127 [2024-07-15 21:19:39.383613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.127 [2024-07-15 21:19:39.393894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.127 [2024-07-15 21:19:39.395301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.127 [2024-07-15 21:19:39.395316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.127 [2024-07-15 21:19:39.405656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.127 [2024-07-15 21:19:39.407053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.127 [2024-07-15 21:19:39.407072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.388 [2024-07-15 21:19:39.417419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.388 [2024-07-15 
21:19:39.418859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.388 [2024-07-15 21:19:39.418875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.388 [2024-07-15 21:19:39.429164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.388 [2024-07-15 21:19:39.430606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.388 [2024-07-15 21:19:39.430622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.440893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.442351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.442366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.452638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.454073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.454088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.464364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.465803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.465817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.476104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.477500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.477515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.487838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.489275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.489290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.499573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 
00:29:12.389 [2024-07-15 21:19:39.501014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.501029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.511291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.512688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.512706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.523029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.524432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.524447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.534761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.536200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.536215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.546507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.547952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.547967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.558268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.559702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.559717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.570001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.571433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.571448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.581736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.583180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.583195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.593489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.594928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.594943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.605214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.606653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.606668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.616956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.618403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.618417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.628700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.630140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.630154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.640431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.641831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.641846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.652151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.653593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.653609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.663900] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.665341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.665357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.389 [2024-07-15 21:19:39.675632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.389 [2024-07-15 21:19:39.677074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.389 [2024-07-15 21:19:39.677089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.687406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.688838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.688854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.699117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.700560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.700575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.710845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.712248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.712263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.722584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.724024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.724039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.734417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.735853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.735869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.746139] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.747579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.747595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.757864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.759296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.759312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.769595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.770991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.771006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.781354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.782799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.782814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.793132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.794576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.650 [2024-07-15 21:19:39.794591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.650 [2024-07-15 21:19:39.804877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.650 [2024-07-15 21:19:39.806313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.806329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.816622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.818062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.818080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 
[2024-07-15 21:19:39.828351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.829788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.829804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.840070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.841517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.841532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.851815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.853252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.853267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.863554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.864993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.865008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.875290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.876686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.876701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.887010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.888473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.888488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.898764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.900220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.900237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 
p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.910495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.911936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.911951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.922332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.923764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.923779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.651 [2024-07-15 21:19:39.934057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.651 [2024-07-15 21:19:39.935469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.651 [2024-07-15 21:19:39.935484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:39.945799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:39.947241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:39.947256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:39.957512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:39.958946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:39.958961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:39.969241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:39.970674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:39.970689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:39.980962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:39.982407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:39.982422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:39.992707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:39.994116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:39.994131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.004982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:40.006429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.006446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.016977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:40.018497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.018512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.028823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:40.030262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.030278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.040593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:40.042045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.042062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.052430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:40.053871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.053887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.064177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed0b0 00:29:12.912 [2024-07-15 21:19:40.065591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.065606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.076950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f5be8 00:29:12.912 [2024-07-15 21:19:40.078390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.078408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.087589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e1710 00:29:12.912 [2024-07-15 21:19:40.088529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.088544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.098750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f9b30 00:29:12.912 [2024-07-15 21:19:40.099682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.099697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.111351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fac10 00:29:12.912 [2024-07-15 21:19:40.112244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.112259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.123144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e38d0 00:29:12.912 [2024-07-15 21:19:40.124080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.124098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.134919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fd640 00:29:12.912 [2024-07-15 21:19:40.135855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.912 [2024-07-15 21:19:40.135870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.912 [2024-07-15 21:19:40.146672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190df550 00:29:12.912 [2024-07-15 21:19:40.147606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.913 [2024-07-15 21:19:40.147621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.913 [2024-07-15 21:19:40.158632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190de470 00:29:12.913 [2024-07-15 21:19:40.159569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.913 [2024-07-15 21:19:40.159585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.913 [2024-07-15 21:19:40.170375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fc560 00:29:12.913 [2024-07-15 21:19:40.171299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.913 [2024-07-15 21:19:40.171315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.913 [2024-07-15 21:19:40.182140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fef90 00:29:12.913 [2024-07-15 21:19:40.183075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.913 [2024-07-15 21:19:40.183090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.913 [2024-07-15 21:19:40.193903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e4de8 00:29:12.913 [2024-07-15 21:19:40.194843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.913 [2024-07-15 21:19:40.194858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.173 [2024-07-15 21:19:40.205662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e5ec8 00:29:13.173 [2024-07-15 21:19:40.206563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.173 [2024-07-15 21:19:40.206578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.173 [2024-07-15 21:19:40.217433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ee5c8 00:29:13.173 [2024-07-15 21:19:40.218324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.173 [2024-07-15 21:19:40.218339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.173 [2024-07-15 21:19:40.229185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ed4e8 00:29:13.173 [2024-07-15 21:19:40.230110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.173 [2024-07-15 21:19:40.230125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.240969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ec408 00:29:13.174 [2024-07-15 21:19:40.241901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.241917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.252734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190eb328 00:29:13.174 [2024-07-15 21:19:40.253669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.253684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.264472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f7100 00:29:13.174 [2024-07-15 21:19:40.265408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.265423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.276252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f7da8 00:29:13.174 [2024-07-15 21:19:40.277145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.277160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.288001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f8e88 00:29:13.174 [2024-07-15 21:19:40.288904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.288919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.299779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190f9f68 00:29:13.174 [2024-07-15 21:19:40.300714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.300729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.311541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fb048 00:29:13.174 [2024-07-15 21:19:40.312434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 
21:19:40.312449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.323313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fe2e8 00:29:13.174 [2024-07-15 21:19:40.324247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.324266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.335053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fd208 00:29:13.174 [2024-07-15 21:19:40.335945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.335963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.346817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190df118 00:29:13.174 [2024-07-15 21:19:40.347757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.347772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.358593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190de038 00:29:13.174 [2024-07-15 21:19:40.359547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.359562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.370375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190fc128 00:29:13.174 [2024-07-15 21:19:40.371306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.371321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.382158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190ff3c8 00:29:13.174 [2024-07-15 21:19:40.383081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.383096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.393928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e5220 00:29:13.174 [2024-07-15 21:19:40.394853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 
[2024-07-15 21:19:40.394869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.404906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e7818 00:29:13.174 [2024-07-15 21:19:40.405803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.405817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.417432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e7818 00:29:13.174 [2024-07-15 21:19:40.418333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.418348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.429182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e7818 00:29:13.174 [2024-07-15 21:19:40.430115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.430133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.440950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e7818 00:29:13.174 [2024-07-15 21:19:40.441886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.441901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:13.174 [2024-07-15 21:19:40.452711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e7818 00:29:13.174 [2024-07-15 21:19:40.453606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.174 [2024-07-15 21:19:40.453621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:13.457 [2024-07-15 21:19:40.464491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e7818 00:29:13.457 [2024-07-15 21:19:40.465432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.457 [2024-07-15 21:19:40.465447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:13.457 [2024-07-15 21:19:40.476222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e7818 00:29:13.457 [2024-07-15 21:19:40.477161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:13.457 [2024-07-15 21:19:40.477176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:13.457 [2024-07-15 21:19:40.487210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e5220
00:29:13.457 [2024-07-15 21:19:40.488121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:13.457 [2024-07-15 21:19:40.488136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:13.457 [2024-07-15 21:19:40.501844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd41eb0) with pdu=0x2000190e8d30
00:29:13.457 [2024-07-15 21:19:40.503564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:13.457 [2024-07-15 21:19:40.503579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:13.457
00:29:13.457 Latency(us)
00:29:13.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.457 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:13.457 nvme0n1 : 2.00 21609.57 84.41 0.00 0.00 5915.18 2225.49 15728.64
00:29:13.457 ===================================================================================================================
00:29:13.457 Total : 21609.57 84.41 0.00 0.00 5915.18 2225.49 15728.64
00:29:13.457 0
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:13.457 | .driver_specific
00:29:13.457 | .nvme_error
00:29:13.457 | .status_code
00:29:13.457 | .command_transient_transport_error'
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 ))
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2150923
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2150923 ']'
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2150923
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:13.457 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2150923
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2150923'
00:29:13.717 killing process with pid 2150923
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2150923
00:29:13.717 Received shutdown signal, test time was about 2.000000 seconds
00:29:13.717
00:29:13.717 Latency(us)
00:29:13.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.717 ===================================================================================================================
00:29:13.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2150923
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2151678
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2151678 /var/tmp/bperf.sock
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2151678 ']'
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:13.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:13.717 21:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:13.717 [2024-07-15 21:19:40.924430] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization...
00:29:13.717 [2024-07-15 21:19:40.924498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151678 ]
00:29:13.717 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:13.717 Zero copy mechanism will not be used.
00:29:13.717 EAL: No free 2048 kB hugepages reported on node 1
00:29:13.976 [2024-07-15 21:19:41.007785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:13.976 [2024-07-15 21:19:41.060975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:14.547 21:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:15.115 nvme0n1
00:29:15.115 21:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:15.115 21:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:15.115 21:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:15.115 21:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:15.115 21:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:15.115 21:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:15.116 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:15.116 Zero copy mechanism will not be used.
00:29:15.116 Running I/O for 2 seconds...
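The trace above is the whole control path for one error-injection pass: bdevperf is started against its own RPC socket, per-status-code NVMe error counters are enabled, the controller is attached with --ddgst so TCP data digests are generated and verified, the crc32c accel operation used for digests is told to corrupt every 32nd result (that RPC goes through rpc_cmd, i.e. the target app's default socket rather than the bdevperf socket), and after perform_tests the transient-transport-error counter is read back via bdev_get_iostat. A condensed sketch of that sequence, reusing the socket path, target address, NQN, flags and jq filter that appear verbatim in this log, looks roughly as follows; the SPDK/RPC/SOCK/errcount variable names and the final kill are shorthand introduced here, standing in for the script's own helpers.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
SOCK=/var/tmp/bperf.sock

# Start bdevperf with its own RPC socket; -z makes it wait for perform_tests.
# (The script then waits for the socket to appear before issuing RPCs.)
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Track NVMe errors per status code, with the bdev retry count used by the test.
"$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Digest corruption is injected through the default RPC socket (rpc_cmd), not
# the bdevperf socket: keep it disabled while attaching, then corrupt every
# 32nd crc32c so data digests fail on the wire.
"$RPC" accel_error_inject_error -o crc32c -t disable
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then require that transient transport errors were counted.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))
kill "$bperfpid"   # the script uses its killprocess helper here

The counter selected by the jq filter matches the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions printed throughout this run, so a positive count confirms the injected digest failures actually reached the host.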
00:29:15.116 [2024-07-15 21:19:42.289580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.289999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.290027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.301911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.302274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.302294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.314089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.314517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.314535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.324478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.324812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.324830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.334436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.334791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.334807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.344485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.344827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.344844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.353642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.353970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.353987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.363640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.363967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.363984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.371575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.371900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.371917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.380993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.381328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.381345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.390275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.390604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.390620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.116 [2024-07-15 21:19:42.399610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.116 [2024-07-15 21:19:42.399748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.116 [2024-07-15 21:19:42.399767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.376 [2024-07-15 21:19:42.408910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.376 [2024-07-15 21:19:42.409244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.376 [2024-07-15 21:19:42.409261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.376 [2024-07-15 21:19:42.419821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.376 [2024-07-15 21:19:42.420144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.376 [2024-07-15 21:19:42.420160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.376 [2024-07-15 21:19:42.431516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.376 [2024-07-15 21:19:42.431840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.376 [2024-07-15 21:19:42.431857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.376 [2024-07-15 21:19:42.442215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.376 [2024-07-15 21:19:42.442566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.376 [2024-07-15 21:19:42.442583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.376 [2024-07-15 21:19:42.454280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.376 [2024-07-15 21:19:42.454613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.376 [2024-07-15 21:19:42.454630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.376 [2024-07-15 21:19:42.466007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.376 [2024-07-15 21:19:42.466149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.376 [2024-07-15 21:19:42.466164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.376 [2024-07-15 21:19:42.477043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.477369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.477386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.487941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.488276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.488292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.499417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.499792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.499808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.511059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.511393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.511409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.522581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.522913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.522930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.533098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.533239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.533253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.544403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.544731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.544748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.556035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.556386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.556406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.567205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.567458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.567482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.577771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.578002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 
[2024-07-15 21:19:42.578018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.589236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.589606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.589622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.599556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.599870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.599886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.610480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.610822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.610838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.621445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.621776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.621792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.633674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.633785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.633800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.644948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.645058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.645074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.377 [2024-07-15 21:19:42.657176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.377 [2024-07-15 21:19:42.657412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:15.377 [2024-07-15 21:19:42.657428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.668225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.668592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.668608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.678538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.678662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.678677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.689365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.689757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.689778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.697541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.697876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.697892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.707468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.707816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.707833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.715664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.715967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.715984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.725865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.726206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.726223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.735809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.736113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.736129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.747920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.748280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.748297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.759627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.759917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.759933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.771089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.771456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.771473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.782244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.782610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.782626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.791114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.791462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.791479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.800196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.800549] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.800565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.810218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.810638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.810654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.818382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.818698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.818714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.827807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.828114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.828130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.837532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.837860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.837877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.848257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.848624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.848641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.859449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.859774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.859790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.869838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.870066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.870082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.880754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.881083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.881100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.892556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.892982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.892998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.904205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.904542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.904558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.915823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.916166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.916183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.638 [2024-07-15 21:19:42.926834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.638 [2024-07-15 21:19:42.927065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.638 [2024-07-15 21:19:42.927080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:42.937695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:42.938018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:42.938035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:42.949579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 
21:19:42.950013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:42.950029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:42.961084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:42.961426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:42.961446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:42.971838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:42.972170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:42.972186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:42.982131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:42.982237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:42.982252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:42.993884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:42.994272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:42.994289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.005143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.005496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.005513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.016862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.017272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.017289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.027808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with 
pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.028154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.028170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.038594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.038918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.038935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.049052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.049386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.049403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.058570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.058704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.058719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.069907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.070113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.070128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.080443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.080810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.080827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.090801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.091062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.091077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.101643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.101865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.101881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.110959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.111350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.111366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.119422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.119644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.119659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.126793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.127095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.127111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.134218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.134463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.134479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.141358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.141558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.141573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.148023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.148395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.148411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.155914] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.156115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.156131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.163858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.164072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.164088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.171281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.171823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-07-15 21:19:43.171840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.899 [2024-07-15 21:19:43.180033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.899 [2024-07-15 21:19:43.180245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.900 [2024-07-15 21:19:43.180260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.900 [2024-07-15 21:19:43.187518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:15.900 [2024-07-15 21:19:43.187881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.900 [2024-07-15 21:19:43.187897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.196467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.196762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.196779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.204116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.204362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.204380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:16.159 [2024-07-15 21:19:43.213277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.213481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.213497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.222687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.223019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.223036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.232555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.232756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.232772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.241166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.241404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.241419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.251315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.251792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.251809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.260949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.261410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.261428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.270723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.271134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.271151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.281414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.281806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.281823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.292722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.293027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.293043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.303700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.304087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.304103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.314425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.315010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.315026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.325367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.325754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.325770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.336995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.337425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.337441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.347218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.347462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.347478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.357461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.357669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.357685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.366881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.367051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.367066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.376956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.377406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.377423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.387285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.387675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.387691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.395761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.395997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.396012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.406510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.406960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.406976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.416981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.417325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.417341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.427256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.427667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.427683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.438022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.438279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.438300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.159 [2024-07-15 21:19:43.444741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.159 [2024-07-15 21:19:43.444944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.159 [2024-07-15 21:19:43.444960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.450311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.450516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.450532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.458126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.458448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.458468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.464973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.465318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.465335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.474143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.474504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 
[2024-07-15 21:19:43.474520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.484756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.485074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.485090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.495159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.495486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.495503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.506298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.506727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.506743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.517476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.517878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.517895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.529549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.530004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.530020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.541316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.541664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.541681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.548841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.549066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.549081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.559438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.559746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.559762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.570150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.570454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.570469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.580836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.418 [2024-07-15 21:19:43.581207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.418 [2024-07-15 21:19:43.581226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.418 [2024-07-15 21:19:43.591753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.592060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.592077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.603224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.603640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.603658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.615849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.616162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.616179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.626530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.626935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.626952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.637739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.637947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.637962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.648432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.648875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.648892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.659156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.659497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.659515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.669496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.669775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.669797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.679500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.679919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.679934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.691350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.691660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.691676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.419 [2024-07-15 21:19:43.701698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.419 [2024-07-15 21:19:43.702026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.419 [2024-07-15 21:19:43.702043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.679 [2024-07-15 21:19:43.712788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.679 [2024-07-15 21:19:43.713023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.679 [2024-07-15 21:19:43.713038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.679 [2024-07-15 21:19:43.722407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.679 [2024-07-15 21:19:43.722638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.679 [2024-07-15 21:19:43.722654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.679 [2024-07-15 21:19:43.732564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.679 [2024-07-15 21:19:43.732911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.679 [2024-07-15 21:19:43.732930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.679 [2024-07-15 21:19:43.740334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.679 [2024-07-15 21:19:43.740699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.679 [2024-07-15 21:19:43.740715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.679 [2024-07-15 21:19:43.746348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.679 [2024-07-15 21:19:43.746550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.746565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.752303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.752657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.752673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.759284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 
[2024-07-15 21:19:43.759482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.759498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.764197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.764402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.764418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.768887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.769090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.769106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.775414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.775611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.775626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.780038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.780242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.780258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.786558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.786754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.786769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.794503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.794714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.794729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.804052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) 
with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.804385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.804401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.813901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.814246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.814262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.823095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.823483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.823501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.833170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.833563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.833582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.843749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.844122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.844139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.851910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.852160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.852176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.858893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.859254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.859273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.867668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.867872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.867887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.874236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.874591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.874608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.879540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.879910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.879927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.884689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.884890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.884906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.889528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.889726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.889741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.895391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.895729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.895745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.904066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.904422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.904438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.913085] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.913383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.913401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.918622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.918827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.918843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.925859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.926057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.926073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.932160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.932366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.932381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.938913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.939113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.939129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.945071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.945279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.945295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.680 [2024-07-15 21:19:43.954778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.955092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.955108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
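Each fault in the run of messages above (which continues below) follows the same pattern: tcp.c reports a data digest mismatch on a received PDU — the NVMe/TCP data digest is a CRC32C over the PDU payload — and the affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (status 00/22) with dnr:0, i.e. the host is allowed to retry it. Purely as an illustration (not part of the SPDK test scripts; the log file name is an assumption), the two message types could be tallied from a saved copy of this console output with:
# Hypothetical post-processing of a captured console log.
grep -c 'Data digest error on tqpair' digest_error_console.log        # faults seen by the TCP transport
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' digest_error_console.log  # resulting NVMe completions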
00:29:16.680 [2024-07-15 21:19:43.963438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.680 [2024-07-15 21:19:43.963861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.680 [2024-07-15 21:19:43.963879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.941 [2024-07-15 21:19:43.971414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.941 [2024-07-15 21:19:43.971888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.941 [2024-07-15 21:19:43.971905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.941 [2024-07-15 21:19:43.982297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.941 [2024-07-15 21:19:43.982684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.941 [2024-07-15 21:19:43.982700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.941 [2024-07-15 21:19:43.993512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.941 [2024-07-15 21:19:43.993831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.941 [2024-07-15 21:19:43.993847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.941 [2024-07-15 21:19:44.005547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.941 [2024-07-15 21:19:44.006031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.941 [2024-07-15 21:19:44.006047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.941 [2024-07-15 21:19:44.018210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.018709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.018726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.029999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.030184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.030199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.039982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.040372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.040388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.051970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.052332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.052348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.061138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.061409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.061424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.068994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.069419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.069436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.079602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.080046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.080070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.088498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.088809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.088827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.096678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.096929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.096946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.101262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.101465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.101480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.106054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.106257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.106273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.111029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.111235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.111250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.115669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.116023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.116039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.120394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.120593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.120608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.124323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.124522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.124537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.128315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.128515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.128531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.132154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.132354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.132370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.136006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.136200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.136216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.143651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.144074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.144091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.149343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.149540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.149556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.156434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.156630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.156646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.160689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.160888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.160904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.164974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.165276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 
[2024-07-15 21:19:44.165293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.171065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.171326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.171342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.176827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.177022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.177038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.181484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.181680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.181696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.187722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.188070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.188086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.194483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.194677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.194693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.200183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.200469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.200486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.206177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.206378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.206394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.210938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.211136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.211151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.215813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.216010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.216026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.221926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.222123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.222142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.942 [2024-07-15 21:19:44.229623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:16.942 [2024-07-15 21:19:44.229914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.942 [2024-07-15 21:19:44.229931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.202 [2024-07-15 21:19:44.238258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:17.202 [2024-07-15 21:19:44.238577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.202 [2024-07-15 21:19:44.238594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.202 [2024-07-15 21:19:44.247539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:17.202 [2024-07-15 21:19:44.247713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.202 [2024-07-15 21:19:44.247728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.202 [2024-07-15 21:19:44.257874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:17.202 [2024-07-15 21:19:44.258218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.202 [2024-07-15 21:19:44.258239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.202 [2024-07-15 21:19:44.268840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:17.202 [2024-07-15 21:19:44.269238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.202 [2024-07-15 21:19:44.269254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.202 [2024-07-15 21:19:44.279885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd42050) with pdu=0x2000190fef90 00:29:17.202 [2024-07-15 21:19:44.280062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.202 [2024-07-15 21:19:44.280078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.202 00:29:17.202 Latency(us) 00:29:17.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.202 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:17.202 nvme0n1 : 2.01 3360.01 420.00 0.00 0.00 4753.26 1815.89 12834.13 00:29:17.202 =================================================================================================================== 00:29:17.202 Total : 3360.01 420.00 0.00 0.00 4753.26 1815.89 12834.13 00:29:17.202 0 00:29:17.202 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:17.202 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:17.202 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:17.202 | .driver_specific 00:29:17.202 | .nvme_error 00:29:17.202 | .status_code 00:29:17.202 | .command_transient_transport_error' 00:29:17.202 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:17.202 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 )) 00:29:17.202 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2151678 00:29:17.203 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2151678 ']' 00:29:17.203 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2151678 00:29:17.203 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:17.203 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.203 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2151678 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:17.463 21:19:44 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2151678' 00:29:17.463 killing process with pid 2151678 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2151678 00:29:17.463 Received shutdown signal, test time was about 2.000000 seconds 00:29:17.463 00:29:17.463 Latency(us) 00:29:17.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.463 =================================================================================================================== 00:29:17.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2151678 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2149360 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2149360 ']' 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2149360 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2149360 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2149360' 00:29:17.463 killing process with pid 2149360 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2149360 00:29:17.463 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2149360 00:29:17.723 00:29:17.723 real 0m15.982s 00:29:17.723 user 0m31.488s 00:29:17.723 sys 0m3.239s 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:17.723 ************************************ 00:29:17.723 END TEST nvmf_digest_error 00:29:17.723 ************************************ 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.723 rmmod nvme_tcp 00:29:17.723 rmmod nvme_fabrics 00:29:17.723 rmmod nvme_keyring 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
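For reference, the get_transient_errcount check traced above reduces to a single bdev_get_iostat RPC against bdevperf's socket plus a jq filter over the returned JSON. A condensed sketch of that sequence, reusing the socket path, bdev name and filter shown in the trace (the surrounding script structure is assumed, not copied from digest.sh):
# Sketch only: count completions that ended in a transient transport error.
errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')
# The test asserts that at least one such error was observed (217 in this run).
(( errcount > 0 ))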
00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2149360 ']' 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2149360 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2149360 ']' 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2149360 00:29:17.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2149360) - No such process 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2149360 is not found' 00:29:17.723 Process with pid 2149360 is not found 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.723 21:19:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.266 21:19:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:20.266 00:29:20.266 real 0m43.052s 00:29:20.266 user 1m5.710s 00:29:20.266 sys 0m12.978s 00:29:20.266 21:19:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:20.266 21:19:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.266 ************************************ 00:29:20.266 END TEST nvmf_digest 00:29:20.266 ************************************ 00:29:20.266 21:19:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:20.266 21:19:47 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:20.266 21:19:47 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:20.266 21:19:47 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:20.266 21:19:47 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:20.266 21:19:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:20.266 21:19:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.266 21:19:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.266 ************************************ 00:29:20.266 START TEST nvmf_bdevperf 00:29:20.266 ************************************ 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:20.266 * Looking for test storage... 
00:29:20.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.266 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:20.267 21:19:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:28.468 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:28.468 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.468 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:28.469 Found net devices under 0000:31:00.0: cvl_0_0 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:28.469 Found net devices under 0000:31:00.1: cvl_0_1 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:28.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:29:28.469 00:29:28.469 --- 10.0.0.2 ping statistics --- 00:29:28.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.469 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:28.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:29:28.469 00:29:28.469 --- 10.0.0.1 ping statistics --- 00:29:28.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.469 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2157130 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2157130 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2157130 ']' 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:28.469 21:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.469 [2024-07-15 21:19:55.431106] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:29:28.469 [2024-07-15 21:19:55.431165] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.469 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.469 [2024-07-15 21:19:55.523176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:28.469 [2024-07-15 21:19:55.617859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:28.469 [2024-07-15 21:19:55.617925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.469 [2024-07-15 21:19:55.617934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.469 [2024-07-15 21:19:55.617941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.469 [2024-07-15 21:19:55.617947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.469 [2024-07-15 21:19:55.618091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.469 [2024-07-15 21:19:55.618277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.469 [2024-07-15 21:19:55.618331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.042 [2024-07-15 21:19:56.251452] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:29.042 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.043 Malloc0 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.043 [2024-07-15 21:19:56.314708] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.043 { 00:29:29.043 "params": { 00:29:29.043 "name": "Nvme$subsystem", 00:29:29.043 "trtype": "$TEST_TRANSPORT", 00:29:29.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.043 "adrfam": "ipv4", 00:29:29.043 "trsvcid": "$NVMF_PORT", 00:29:29.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.043 "hdgst": ${hdgst:-false}, 00:29:29.043 "ddgst": ${ddgst:-false} 00:29:29.043 }, 00:29:29.043 "method": "bdev_nvme_attach_controller" 00:29:29.043 } 00:29:29.043 EOF 00:29:29.043 )") 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:29.043 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:29.304 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:29.304 21:19:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:29.304 "params": { 00:29:29.304 "name": "Nvme1", 00:29:29.304 "trtype": "tcp", 00:29:29.304 "traddr": "10.0.0.2", 00:29:29.304 "adrfam": "ipv4", 00:29:29.304 "trsvcid": "4420", 00:29:29.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:29.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:29.304 "hdgst": false, 00:29:29.304 "ddgst": false 00:29:29.304 }, 00:29:29.304 "method": "bdev_nvme_attach_controller" 00:29:29.304 }' 00:29:29.305 [2024-07-15 21:19:56.380315] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:29:29.305 [2024-07-15 21:19:56.380366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157189 ] 00:29:29.305 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.305 [2024-07-15 21:19:56.446547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.305 [2024-07-15 21:19:56.511334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.876 Running I/O for 1 seconds... 
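
Up to this point the trace shows the full target-side bring-up: one port of the E810 NIC (cvl_0_0, 0000:31:00.0) is moved into the cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, nvmf_tgt is started inside the namespace, and the subsystem is built over the RPC socket: TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem cnode1, and a listener on port 4420, after which bdevperf drives a 1-second verify workload. A rough stand-alone equivalent, reconstructed from the commands visible in the trace (interface names, addresses and RPC arguments are copied from the log; the surrounding shell is only an approximation of nvmf/common.sh and bdevperf.sh, not the scripts themselves), run from the SPDK repository root:

# Move the target-side port into its own namespace; the initiator port stays in the root namespace.
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace and configure it over /var/tmp/spdk.sock.
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
sleep 2   # crude stand-in for waitforlisten
sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
sudo ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
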
00:29:30.820 00:29:30.820 Latency(us) 00:29:30.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.820 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:30.820 Verification LBA range: start 0x0 length 0x4000 00:29:30.820 Nvme1n1 : 1.01 9299.85 36.33 0.00 0.00 13698.85 2894.51 15073.28 00:29:30.820 =================================================================================================================== 00:29:30.820 Total : 9299.85 36.33 0.00 0.00 13698.85 2894.51 15073.28 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2157511 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:30.820 { 00:29:30.820 "params": { 00:29:30.820 "name": "Nvme$subsystem", 00:29:30.820 "trtype": "$TEST_TRANSPORT", 00:29:30.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.820 "adrfam": "ipv4", 00:29:30.820 "trsvcid": "$NVMF_PORT", 00:29:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.820 "hdgst": ${hdgst:-false}, 00:29:30.820 "ddgst": ${ddgst:-false} 00:29:30.820 }, 00:29:30.820 "method": "bdev_nvme_attach_controller" 00:29:30.820 } 00:29:30.820 EOF 00:29:30.820 )") 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:30.820 21:19:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:30.820 "params": { 00:29:30.820 "name": "Nvme1", 00:29:30.820 "trtype": "tcp", 00:29:30.820 "traddr": "10.0.0.2", 00:29:30.820 "adrfam": "ipv4", 00:29:30.820 "trsvcid": "4420", 00:29:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.820 "hdgst": false, 00:29:30.820 "ddgst": false 00:29:30.820 }, 00:29:30.820 "method": "bdev_nvme_attach_controller" 00:29:30.820 }' 00:29:30.820 [2024-07-15 21:19:58.051679] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:29:30.820 [2024-07-15 21:19:58.051739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157511 ] 00:29:30.820 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.081 [2024-07-15 21:19:58.116945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.081 [2024-07-15 21:19:58.181510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.081 Running I/O for 15 seconds... 
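
For the 15-second run the harness again generates the bdev configuration on the fly: gen_nvmf_target_json prints a bdev_nvme_attach_controller entry for Nvme1 and bdevperf consumes it through process substitution (--json /dev/fd/63). A minimal hand-written equivalent against the same target, assuming the standard SPDK JSON config layout around the attach entry that the log prints verbatim (the generated config may carry additional entries this sketch omits):

cat > /tmp/bdevperf-nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
sudo ./build/examples/bdevperf --json /tmp/bdevperf-nvme1.json -q 128 -o 4096 -w verify -t 15 -f

The controller named Nvme1 shows up as bdev Nvme1n1, which is the job name in the result table above; the run only makes sense while the target from the previous sketch is still listening.
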
00:29:34.387 21:20:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2157130 00:29:34.387 21:20:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:34.387 [2024-07-15 21:20:01.016137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 
21:20:01.016476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.387 [2024-07-15 21:20:01.016691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-15 21:20:01.016698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016812] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.016988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.016995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:34.388 [2024-07-15 21:20:01.017324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-15 21:20:01.017380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.388 [2024-07-15 21:20:01.017389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 
21:20:01.017488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017822] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.017988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.017995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.018004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.018012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.018021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.018028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.018037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.018044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.018053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.018061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.018070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.389 [2024-07-15 21:20:01.018077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.389 [2024-07-15 21:20:01.018086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110608 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:34.390 [2024-07-15 21:20:01.018327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.390 [2024-07-15 21:20:01.018427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396420 is same with the state(5) to be set 00:29:34.390 [2024-07-15 21:20:01.018444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.390 [2024-07-15 21:20:01.018449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.390 [2024-07-15 21:20:01.018456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110744 len:8 PRP1 0x0 PRP2 0x0 00:29:34.390 [2024-07-15 21:20:01.018463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.390 [2024-07-15 21:20:01.018504] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2396420 was disconnected and freed. reset controller. 
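
The completions above all carry the status pair "(00/08)", which in NVMe terms is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08 (Command Aborted due to SQ Deletion): the reads still queued on the I/O submission queue were discarded when the qpair was torn down, after which bdev_nvme frees the qpair and schedules a controller reset. A minimal standalone sketch of pulling SCT/SC out of completion Dword 3 as the spec lays the bits out (this mirrors the spec layout only, not SPDK's internal completion struct):

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe CQE Dword 3: CID[15:0], Phase[16], Status Field[31:17].
     * Within the status field: SC is bits 24:17, SCT is bits 27:25,
     * More is bit 30, DNR is bit 31. */
    static void decode_cqe_dw3(uint32_t dw3)
    {
        uint16_t cid = dw3 & 0xffff;
        uint8_t  sc  = (dw3 >> 17) & 0xff;
        uint8_t  sct = (dw3 >> 25) & 0x7;
        int      dnr = (dw3 >> 31) & 0x1;

        printf("cid:%u sct:0x%02x sc:0x%02x dnr:%d\n",
               (unsigned)cid, (unsigned)sct, (unsigned)sc, dnr);
        if (sct == 0x0 && sc == 0x08) {
            printf("  -> ABORTED - SQ DELETION, matching the completions above\n");
        }
    }

    int main(void)
    {
        /* SCT=0, SC=0x08, DNR=0, cid=0 -> the "(00/08)" completions above. */
        decode_cqe_dw3(0x08u << 17);
        return 0;
    }
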
00:29:34.390 [2024-07-15 21:20:01.022054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.390 [2024-07-15 21:20:01.022101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.390 [2024-07-15 21:20:01.022894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.390 [2024-07-15 21:20:01.022911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.390 [2024-07-15 21:20:01.022919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.390 [2024-07-15 21:20:01.023140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.390 [2024-07-15 21:20:01.023365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.390 [2024-07-15 21:20:01.023373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.390 [2024-07-15 21:20:01.023382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.390 [2024-07-15 21:20:01.026939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.390 [2024-07-15 21:20:01.036153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.390 [2024-07-15 21:20:01.036827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.390 [2024-07-15 21:20:01.036865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.390 [2024-07-15 21:20:01.036876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.390 [2024-07-15 21:20:01.037116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.390 [2024-07-15 21:20:01.037350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.390 [2024-07-15 21:20:01.037359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.390 [2024-07-15 21:20:01.037367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.390 [2024-07-15 21:20:01.040926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
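
Every reconnect attempt fails the same way: posix_sock_create() reports connect() errno 111, which on Linux is ECONNREFUSED (nothing is accepting on 10.0.0.2:4420 any more), and the subsequent "(9): Bad file descriptor" flush errors are EBADF on the already-closed socket. A minimal standalone reproduction of the first error with plain POSIX sockets (this is the same syscall path, not SPDK's sock layer; the address and port are taken from the log):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111
             * (ECONNREFUSED), the same value posix_sock_create() logs above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }
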
00:29:34.390 [2024-07-15 21:20:01.050146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.390 [2024-07-15 21:20:01.050863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.390 [2024-07-15 21:20:01.050900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.390 [2024-07-15 21:20:01.050912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.390 [2024-07-15 21:20:01.051155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.390 [2024-07-15 21:20:01.051385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.390 [2024-07-15 21:20:01.051395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.390 [2024-07-15 21:20:01.051402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.390 [2024-07-15 21:20:01.054957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.390 [2024-07-15 21:20:01.063970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.390 [2024-07-15 21:20:01.064643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.390 [2024-07-15 21:20:01.064680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.390 [2024-07-15 21:20:01.064691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.390 [2024-07-15 21:20:01.064930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.390 [2024-07-15 21:20:01.065153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.390 [2024-07-15 21:20:01.065161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.390 [2024-07-15 21:20:01.065169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.390 [2024-07-15 21:20:01.068731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.390 [2024-07-15 21:20:01.077938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.390 [2024-07-15 21:20:01.078731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.390 [2024-07-15 21:20:01.078769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.390 [2024-07-15 21:20:01.078784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.079024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.079255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.079264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.079272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.082826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.391 [2024-07-15 21:20:01.091824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.092436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.092473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.092484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.092723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.092946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.092954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.092962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.096525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.391 [2024-07-15 21:20:01.105820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.106540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.106578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.106588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.106828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.107050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.107058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.107066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.110627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.391 [2024-07-15 21:20:01.119634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.120288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.120325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.120337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.120578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.120800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.120813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.120821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.124385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.391 [2024-07-15 21:20:01.133597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.134278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.134315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.134325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.134565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.134788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.134796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.134803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.138363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.391 [2024-07-15 21:20:01.147574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.148296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.148333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.148345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.148588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.148810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.148819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.148827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.152603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.391 [2024-07-15 21:20:01.161421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.162093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.162130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.162141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.162389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.162613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.162621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.162628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.166179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.391 [2024-07-15 21:20:01.175395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.176107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.176144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.176154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.176402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.176626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.176634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.176641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.180193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.391 [2024-07-15 21:20:01.189193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.189908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.189945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.189957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.391 [2024-07-15 21:20:01.190201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.391 [2024-07-15 21:20:01.190432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.391 [2024-07-15 21:20:01.190441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.391 [2024-07-15 21:20:01.190449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.391 [2024-07-15 21:20:01.194003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.391 [2024-07-15 21:20:01.203010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.391 [2024-07-15 21:20:01.203717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.391 [2024-07-15 21:20:01.203754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.391 [2024-07-15 21:20:01.203765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.204004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.204227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.204243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.204250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.207804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.392 [2024-07-15 21:20:01.216817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.217365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.217402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.217414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.217661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.217884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.217893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.217900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.221462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.392 [2024-07-15 21:20:01.230671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.231449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.231485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.231496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.231735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.231958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.231967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.231974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.235533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.392 [2024-07-15 21:20:01.244535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.245227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.245270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.245282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.245523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.245746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.245755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.245763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.249319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.392 [2024-07-15 21:20:01.258538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.259269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.259306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.259318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.259561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.259783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.259791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.259804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.263369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
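
From here on the log is one cycle repeated: nvme_ctrlr_disconnect announces a reset, the TCP connect to 10.0.0.2:4420 is refused, controller reinitialization fails, and _bdev_nvme_reset_ctrlr_complete reports "Resetting controller failed" before the next attempt starts a few milliseconds later. The shape of that behavior is a bounded reconnect-retry loop; the sketch below is illustrative only, not SPDK's bdev_nvme reset state machine, and try_connect() is a hypothetical stand-in for the transport connect:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for the transport-level connect; in the log this
     * is nvme_tcp_qpair_connect_sock() failing with ECONNREFUSED every time. */
    static bool try_connect(void)
    {
        errno = ECONNREFUSED;
        return false;
    }

    int main(void)
    {
        const int max_attempts = 8;
        const struct timespec delay = { .tv_sec = 0, .tv_nsec = 14 * 1000 * 1000 };

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            printf("resetting controller (attempt %d)\n", attempt);
            if (try_connect()) {
                printf("controller reinitialized\n");
                return 0;
            }
            printf("controller reinitialization failed: errno %d\n", errno);
            nanosleep(&delay, NULL);   /* the log shows roughly 14 ms between attempts */
        }

        printf("giving up: controller left in failed state\n");
        return 1;
    }
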
00:29:34.392 [2024-07-15 21:20:01.272376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.273078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.273115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.273126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.273377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.273600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.273609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.273617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.277172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.392 [2024-07-15 21:20:01.286187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.286851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.286888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.286898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.287137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.287369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.287379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.287386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.290944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.392 [2024-07-15 21:20:01.300166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.300865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.300901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.300912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.301151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.301383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.301392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.301399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.304953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.392 [2024-07-15 21:20:01.314175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.314849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.314886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.314896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.315136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.315371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.315379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.315387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.318942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.392 [2024-07-15 21:20:01.328157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.328872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.328910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.328921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.329164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.329397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.329405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.329413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.332968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.392 [2024-07-15 21:20:01.341977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.342671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.342707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.342718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.392 [2024-07-15 21:20:01.342957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.392 [2024-07-15 21:20:01.343180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.392 [2024-07-15 21:20:01.343188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.392 [2024-07-15 21:20:01.343196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.392 [2024-07-15 21:20:01.346760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.392 [2024-07-15 21:20:01.355970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.392 [2024-07-15 21:20:01.356599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.392 [2024-07-15 21:20:01.356617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.392 [2024-07-15 21:20:01.356624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.356848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.357067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.357075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.357081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.360646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.393 [2024-07-15 21:20:01.369856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.370389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.370405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.370413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.370632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.370850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.370858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.370865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.374413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.393 [2024-07-15 21:20:01.383818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.384511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.384548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.384559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.384798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.385021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.385029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.385037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.388596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.393 [2024-07-15 21:20:01.397800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.398479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.398516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.398526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.398765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.398989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.398997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.399008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.402571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.393 [2024-07-15 21:20:01.411774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.412439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.412477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.412488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.412727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.412949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.412958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.412965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.416534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.393 [2024-07-15 21:20:01.425742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.426445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.426482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.426493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.426732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.426954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.426962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.426971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.430533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.393 [2024-07-15 21:20:01.439539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.440253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.440290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.440302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.440542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.440765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.440773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.440781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.444338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.393 [2024-07-15 21:20:01.453337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.453996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.454037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.454047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.454297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.454521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.454529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.454537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.458100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.393 [2024-07-15 21:20:01.467332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.467957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.467975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.467983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.468204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.468430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.468438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.468445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.471996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.393 [2024-07-15 21:20:01.481212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.481711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.481726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.481734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.481953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.482171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.482179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.482186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.485740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.393 [2024-07-15 21:20:01.495164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.495716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.495730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.495738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.393 [2024-07-15 21:20:01.495957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.393 [2024-07-15 21:20:01.496182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.393 [2024-07-15 21:20:01.496191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.393 [2024-07-15 21:20:01.496198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.393 [2024-07-15 21:20:01.499755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.393 [2024-07-15 21:20:01.508973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.393 [2024-07-15 21:20:01.509533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.393 [2024-07-15 21:20:01.509549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.393 [2024-07-15 21:20:01.509556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.509775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.509994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.510001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.510008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.513563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.394 [2024-07-15 21:20:01.522788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.523238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.523254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.523261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.523480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.523698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.523707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.523713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.527270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.394 [2024-07-15 21:20:01.536697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.537438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.537475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.537485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.537725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.537948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.537956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.537963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.541528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.394 [2024-07-15 21:20:01.550541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.551154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.551172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.551180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.551406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.551626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.551633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.551640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.555187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.394 [2024-07-15 21:20:01.564414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.564979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.564993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.565001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.565220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.565444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.565453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.565460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.569007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.394 [2024-07-15 21:20:01.578224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.578887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.578924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.578935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.579175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.579409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.579419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.579426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.582983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.394 [2024-07-15 21:20:01.592206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.592797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.592815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.592828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.593048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.593274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.593282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.593289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.596837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.394 [2024-07-15 21:20:01.606088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.606692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.606729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.606741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.606984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.607207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.607216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.607224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.610792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.394 [2024-07-15 21:20:01.620020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.620730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.620767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.620777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.621017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.621249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.621258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.621266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.624831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.394 [2024-07-15 21:20:01.633845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.634545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.634583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.634594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.634834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.635056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.635069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.635076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.638646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.394 [2024-07-15 21:20:01.647667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.648175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.648193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.648200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.394 [2024-07-15 21:20:01.648425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.394 [2024-07-15 21:20:01.648645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.394 [2024-07-15 21:20:01.648652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.394 [2024-07-15 21:20:01.648659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.394 [2024-07-15 21:20:01.652208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.394 [2024-07-15 21:20:01.661651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.394 [2024-07-15 21:20:01.662279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.394 [2024-07-15 21:20:01.662302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.394 [2024-07-15 21:20:01.662310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.395 [2024-07-15 21:20:01.662534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.395 [2024-07-15 21:20:01.662754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.395 [2024-07-15 21:20:01.662763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.395 [2024-07-15 21:20:01.662770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.395 [2024-07-15 21:20:01.666330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.657 [2024-07-15 21:20:01.675552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.657 [2024-07-15 21:20:01.676209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.657 [2024-07-15 21:20:01.676253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.657 [2024-07-15 21:20:01.676266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.657 [2024-07-15 21:20:01.676506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.657 [2024-07-15 21:20:01.676729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.657 [2024-07-15 21:20:01.676737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.657 [2024-07-15 21:20:01.676745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.657 [2024-07-15 21:20:01.680308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.657 [2024-07-15 21:20:01.689544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.657 [2024-07-15 21:20:01.690119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.657 [2024-07-15 21:20:01.690137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.657 [2024-07-15 21:20:01.690144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.657 [2024-07-15 21:20:01.690371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.657 [2024-07-15 21:20:01.690591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.657 [2024-07-15 21:20:01.690599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.657 [2024-07-15 21:20:01.690606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.657 [2024-07-15 21:20:01.694155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.657 [2024-07-15 21:20:01.703380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.703986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.704001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.704009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.704228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.704453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.704460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.704467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.708017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.658 [2024-07-15 21:20:01.717241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.717808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.717823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.717831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.718049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.718275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.718283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.718290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.721840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.658 [2024-07-15 21:20:01.731066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.731728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.731765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.731775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.732020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.732252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.732261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.732269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.735826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.658 [2024-07-15 21:20:01.745055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.745682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.745701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.745708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.745928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.746147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.746154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.746161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.749718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.658 [2024-07-15 21:20:01.758946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.759522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.759538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.759546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.759765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.759984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.759992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.759999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.763555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.658 [2024-07-15 21:20:01.772778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.773458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.773495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.773506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.773745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.773968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.773976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.773988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.777554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.658 [2024-07-15 21:20:01.786781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.787368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.787387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.787395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.787614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.787833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.787841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.787848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.791406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.658 [2024-07-15 21:20:01.800628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.801325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.801362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.801375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.801618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.801840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.801849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.801857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.805417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.658 [2024-07-15 21:20:01.814468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.815088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.815105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.815113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.815338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.815558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.815565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.815572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.819118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.658 [2024-07-15 21:20:01.828347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.829018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.829055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.829066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.829313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.829538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.829546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.829553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.658 [2024-07-15 21:20:01.833118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.658 [2024-07-15 21:20:01.842347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.658 [2024-07-15 21:20:01.842973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.658 [2024-07-15 21:20:01.842991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.658 [2024-07-15 21:20:01.842998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.658 [2024-07-15 21:20:01.843218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.658 [2024-07-15 21:20:01.843445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.658 [2024-07-15 21:20:01.843454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.658 [2024-07-15 21:20:01.843461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.659 [2024-07-15 21:20:01.847012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.659 [2024-07-15 21:20:01.856239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.659 [2024-07-15 21:20:01.856907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.659 [2024-07-15 21:20:01.856945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.659 [2024-07-15 21:20:01.856955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.659 [2024-07-15 21:20:01.857194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.659 [2024-07-15 21:20:01.857435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.659 [2024-07-15 21:20:01.857445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.659 [2024-07-15 21:20:01.857453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.659 [2024-07-15 21:20:01.861010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.659 [2024-07-15 21:20:01.870239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.659 [2024-07-15 21:20:01.870815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.659 [2024-07-15 21:20:01.870833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.659 [2024-07-15 21:20:01.870840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.659 [2024-07-15 21:20:01.871064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.659 [2024-07-15 21:20:01.871290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.659 [2024-07-15 21:20:01.871298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.659 [2024-07-15 21:20:01.871305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.659 [2024-07-15 21:20:01.874857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.659 [2024-07-15 21:20:01.884078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.659 [2024-07-15 21:20:01.884739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.659 [2024-07-15 21:20:01.884776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.659 [2024-07-15 21:20:01.884787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.659 [2024-07-15 21:20:01.885026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.659 [2024-07-15 21:20:01.885259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.659 [2024-07-15 21:20:01.885268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.659 [2024-07-15 21:20:01.885275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.659 [2024-07-15 21:20:01.888833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.659 [2024-07-15 21:20:01.898058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.659 [2024-07-15 21:20:01.898683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.659 [2024-07-15 21:20:01.898701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.659 [2024-07-15 21:20:01.898709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.659 [2024-07-15 21:20:01.898928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.659 [2024-07-15 21:20:01.899148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.659 [2024-07-15 21:20:01.899156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.659 [2024-07-15 21:20:01.899162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.659 [2024-07-15 21:20:01.902718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.659 [2024-07-15 21:20:01.911938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.659 [2024-07-15 21:20:01.912522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.659 [2024-07-15 21:20:01.912538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.659 [2024-07-15 21:20:01.912546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.659 [2024-07-15 21:20:01.912765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.659 [2024-07-15 21:20:01.912984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.659 [2024-07-15 21:20:01.912992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.659 [2024-07-15 21:20:01.912999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.659 [2024-07-15 21:20:01.916556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.659 [2024-07-15 21:20:01.925774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.659 [2024-07-15 21:20:01.926471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.659 [2024-07-15 21:20:01.926508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.659 [2024-07-15 21:20:01.926519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.659 [2024-07-15 21:20:01.926758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.659 [2024-07-15 21:20:01.926981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.659 [2024-07-15 21:20:01.926989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.659 [2024-07-15 21:20:01.926997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.659 [2024-07-15 21:20:01.930558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.659 [2024-07-15 21:20:01.939776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.659 [2024-07-15 21:20:01.940516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.659 [2024-07-15 21:20:01.940554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.659 [2024-07-15 21:20:01.940565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.659 [2024-07-15 21:20:01.940804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.659 [2024-07-15 21:20:01.941027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.659 [2024-07-15 21:20:01.941035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.659 [2024-07-15 21:20:01.941043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.659 [2024-07-15 21:20:01.944605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.921 [2024-07-15 21:20:01.953609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:01.954317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:01.954355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:01.954366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:01.954604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:01.954827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:01.954835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:01.954843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:01.958410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.921 [2024-07-15 21:20:01.967413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:01.968031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:01.968054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:01.968062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:01.968287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:01.968507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:01.968515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:01.968522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:01.972068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.921 [2024-07-15 21:20:01.981283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:01.981894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:01.981910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:01.981917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:01.982136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:01.982362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:01.982371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:01.982378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:01.985925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.921 [2024-07-15 21:20:01.995132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:01.995786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:01.995823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:01.995835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:01.996074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:01.996304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:01.996313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:01.996320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:01.999877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.921 [2024-07-15 21:20:02.009092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:02.009666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:02.009684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:02.009692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:02.009912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:02.010135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:02.010144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:02.010151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:02.013705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.921 [2024-07-15 21:20:02.022914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:02.023401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:02.023417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:02.023424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:02.023643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:02.023862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:02.023870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:02.023877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:02.027428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.921 [2024-07-15 21:20:02.036846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:02.037536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:02.037573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:02.037584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:02.037823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:02.038046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:02.038054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:02.038062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:02.041632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.921 [2024-07-15 21:20:02.050932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:02.051616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:02.051654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:02.051665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:02.051904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:02.052127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:02.052135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:02.052142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:02.055704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.921 [2024-07-15 21:20:02.064968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:02.065667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:02.065704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:02.065715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:02.065955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.921 [2024-07-15 21:20:02.066178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.921 [2024-07-15 21:20:02.066186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.921 [2024-07-15 21:20:02.066193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.921 [2024-07-15 21:20:02.069756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.921 [2024-07-15 21:20:02.078979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.921 [2024-07-15 21:20:02.079648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.921 [2024-07-15 21:20:02.079685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.921 [2024-07-15 21:20:02.079696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.921 [2024-07-15 21:20:02.079935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.080158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.080167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.080174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.083741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.922 [2024-07-15 21:20:02.092959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.093671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.093707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.093718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.093957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.094180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.094189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.094197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.097758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.922 [2024-07-15 21:20:02.106766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.107473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.107511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.107526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.107765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.107988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.107996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.108004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.111565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.922 [2024-07-15 21:20:02.120577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.121156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.121174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.121181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.121406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.121626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.121633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.121641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.125185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.922 [2024-07-15 21:20:02.134400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.134949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.134964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.134971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.135191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.135415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.135424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.135430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.138973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.922 [2024-07-15 21:20:02.148187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.148845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.148882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.148893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.149132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.149363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.149376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.149384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.153147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.922 [2024-07-15 21:20:02.162171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.162885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.162922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.162932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.163172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.163403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.163412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.163420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.166974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.922 [2024-07-15 21:20:02.175983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.176695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.176732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.176743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.176983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.177206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.177214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.177222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.180779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.922 [2024-07-15 21:20:02.189784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.190532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.190570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.190581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.190820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.191043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.191051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.191058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.194620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.922 [2024-07-15 21:20:02.203628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.922 [2024-07-15 21:20:02.204254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.922 [2024-07-15 21:20:02.204272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:34.922 [2024-07-15 21:20:02.204280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:34.922 [2024-07-15 21:20:02.204500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:34.922 [2024-07-15 21:20:02.204719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.922 [2024-07-15 21:20:02.204728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.922 [2024-07-15 21:20:02.204735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.922 [2024-07-15 21:20:02.208286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.183 [2024-07-15 21:20:02.217499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.183 [2024-07-15 21:20:02.218102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.183 [2024-07-15 21:20:02.218118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.183 [2024-07-15 21:20:02.218126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.183 [2024-07-15 21:20:02.218349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.183 [2024-07-15 21:20:02.218569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.183 [2024-07-15 21:20:02.218576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.183 [2024-07-15 21:20:02.218583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.183 [2024-07-15 21:20:02.222128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.183 [2024-07-15 21:20:02.231340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.183 [2024-07-15 21:20:02.232032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.183 [2024-07-15 21:20:02.232069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.183 [2024-07-15 21:20:02.232080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.183 [2024-07-15 21:20:02.232327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.183 [2024-07-15 21:20:02.232551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.232559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.232567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.236119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.184 [2024-07-15 21:20:02.245331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.245953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.245970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.245978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.246202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.246427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.246435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.246442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.249994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.184 [2024-07-15 21:20:02.259240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.259936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.259973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.259984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.260223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.260464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.260473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.260480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.264032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.184 [2024-07-15 21:20:02.273036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.273669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.273687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.273695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.273915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.274134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.274142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.274149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.277697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.184 [2024-07-15 21:20:02.286920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.287460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.287477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.287484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.287703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.287922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.287929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.287941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.291494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.184 [2024-07-15 21:20:02.300922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.301477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.301493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.301500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.301719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.301938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.301945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.301952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.305506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.184 [2024-07-15 21:20:02.314721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.315324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.315339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.315346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.315565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.315784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.315792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.315798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.319391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.184 [2024-07-15 21:20:02.328619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.329182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.329196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.329204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.329429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.329649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.329656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.329663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.333211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.184 [2024-07-15 21:20:02.342425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.343128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.343164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.343175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.343422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.343646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.343654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.343662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.347212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.184 [2024-07-15 21:20:02.356216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.356837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.356874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.356885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.357124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.357355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.357364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.357371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.360933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.184 [2024-07-15 21:20:02.370145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.370803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.370840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.370851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.371090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.184 [2024-07-15 21:20:02.371319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.184 [2024-07-15 21:20:02.371328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.184 [2024-07-15 21:20:02.371335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.184 [2024-07-15 21:20:02.374888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.184 [2024-07-15 21:20:02.384101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.184 [2024-07-15 21:20:02.384762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.184 [2024-07-15 21:20:02.384800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.184 [2024-07-15 21:20:02.384811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.184 [2024-07-15 21:20:02.385050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.185 [2024-07-15 21:20:02.385285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.185 [2024-07-15 21:20:02.385294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.185 [2024-07-15 21:20:02.385302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.185 [2024-07-15 21:20:02.388859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.185 [2024-07-15 21:20:02.398071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.185 [2024-07-15 21:20:02.398755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.185 [2024-07-15 21:20:02.398792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.185 [2024-07-15 21:20:02.398803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.185 [2024-07-15 21:20:02.399042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.185 [2024-07-15 21:20:02.399274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.185 [2024-07-15 21:20:02.399284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.185 [2024-07-15 21:20:02.399291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.185 [2024-07-15 21:20:02.402844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.185 [2024-07-15 21:20:02.412050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.185 [2024-07-15 21:20:02.412740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.185 [2024-07-15 21:20:02.412777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.185 [2024-07-15 21:20:02.412788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.185 [2024-07-15 21:20:02.413026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.185 [2024-07-15 21:20:02.413257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.185 [2024-07-15 21:20:02.413266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.185 [2024-07-15 21:20:02.413274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.185 [2024-07-15 21:20:02.416826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.185 [2024-07-15 21:20:02.426036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.185 [2024-07-15 21:20:02.426722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.185 [2024-07-15 21:20:02.426759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.185 [2024-07-15 21:20:02.426770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.185 [2024-07-15 21:20:02.427009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.185 [2024-07-15 21:20:02.427241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.185 [2024-07-15 21:20:02.427250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.185 [2024-07-15 21:20:02.427258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.185 [2024-07-15 21:20:02.430815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.185 [2024-07-15 21:20:02.440034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.185 [2024-07-15 21:20:02.440618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.185 [2024-07-15 21:20:02.440636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.185 [2024-07-15 21:20:02.440644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.185 [2024-07-15 21:20:02.440864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.185 [2024-07-15 21:20:02.441083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.185 [2024-07-15 21:20:02.441091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.185 [2024-07-15 21:20:02.441098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.185 [2024-07-15 21:20:02.444649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.185 [2024-07-15 21:20:02.453852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.185 [2024-07-15 21:20:02.454419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.185 [2024-07-15 21:20:02.454435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.185 [2024-07-15 21:20:02.454443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.185 [2024-07-15 21:20:02.454661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.185 [2024-07-15 21:20:02.454880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.185 [2024-07-15 21:20:02.454888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.185 [2024-07-15 21:20:02.454895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.185 [2024-07-15 21:20:02.458451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.185 [2024-07-15 21:20:02.467664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.185 [2024-07-15 21:20:02.468237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.185 [2024-07-15 21:20:02.468252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.185 [2024-07-15 21:20:02.468260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.185 [2024-07-15 21:20:02.468478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.185 [2024-07-15 21:20:02.468697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.185 [2024-07-15 21:20:02.468705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.185 [2024-07-15 21:20:02.468712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.185 [2024-07-15 21:20:02.472261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.446 [2024-07-15 21:20:02.481460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.446 [2024-07-15 21:20:02.482098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.446 [2024-07-15 21:20:02.482135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.446 [2024-07-15 21:20:02.482150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.446 [2024-07-15 21:20:02.482398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.482622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.482630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.482638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.486190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.447 [2024-07-15 21:20:02.495400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.496079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.496116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.496127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.496374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.496598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.496606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.496613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.500165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.447 [2024-07-15 21:20:02.509380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.510071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.510108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.510119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.510367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.510591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.510599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.510607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.514159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.447 [2024-07-15 21:20:02.523373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.523920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.523938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.523945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.524164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.524394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.524403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.524410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.527959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.447 [2024-07-15 21:20:02.537167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.537863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.537899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.537910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.538149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.538379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.538387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.538395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.541950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.447 [2024-07-15 21:20:02.551159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.551652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.551671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.551678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.551898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.552117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.552124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.552131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.555683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.447 [2024-07-15 21:20:02.565112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.565683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.565698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.565706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.565925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.566143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.566152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.566159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.569735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.447 [2024-07-15 21:20:02.578945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.579549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.579565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.579573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.579792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.580011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.580019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.580026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.583575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.447 [2024-07-15 21:20:02.592777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.593335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.593350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.593358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.593577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.593795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.593810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.593817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.597363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.447 [2024-07-15 21:20:02.606564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.607084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.607121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.607132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.607380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.607604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.607612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.607619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.611171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.447 [2024-07-15 21:20:02.620383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.621088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.621125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.621139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.621387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.621612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.621619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.621627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.625177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.447 [2024-07-15 21:20:02.634176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.634854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.634891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.634902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.635141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.635373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.635382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.635389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.638943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.447 [2024-07-15 21:20:02.648156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.648748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.648785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.648796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.649035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.649265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.649274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.649282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.652833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.447 [2024-07-15 21:20:02.662053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.662762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.662799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.662810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.663049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.663284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.663297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.663305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.666859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.447 [2024-07-15 21:20:02.675861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.676564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.447 [2024-07-15 21:20:02.676601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.447 [2024-07-15 21:20:02.676611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.447 [2024-07-15 21:20:02.676851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.447 [2024-07-15 21:20:02.677074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.447 [2024-07-15 21:20:02.677082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.447 [2024-07-15 21:20:02.677090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.447 [2024-07-15 21:20:02.680650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.447 [2024-07-15 21:20:02.689860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.447 [2024-07-15 21:20:02.690554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.448 [2024-07-15 21:20:02.690591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.448 [2024-07-15 21:20:02.690602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.448 [2024-07-15 21:20:02.690841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.448 [2024-07-15 21:20:02.691064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.448 [2024-07-15 21:20:02.691073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.448 [2024-07-15 21:20:02.691080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.448 [2024-07-15 21:20:02.694641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.448 [2024-07-15 21:20:02.703850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.448 [2024-07-15 21:20:02.704414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.448 [2024-07-15 21:20:02.704450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.448 [2024-07-15 21:20:02.704461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.448 [2024-07-15 21:20:02.704700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.448 [2024-07-15 21:20:02.704923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.448 [2024-07-15 21:20:02.704931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.448 [2024-07-15 21:20:02.704939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.448 [2024-07-15 21:20:02.708501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.448 [2024-07-15 21:20:02.717711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.448 [2024-07-15 21:20:02.718176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.448 [2024-07-15 21:20:02.718195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.448 [2024-07-15 21:20:02.718203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.448 [2024-07-15 21:20:02.718434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.448 [2024-07-15 21:20:02.718656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.448 [2024-07-15 21:20:02.718664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.448 [2024-07-15 21:20:02.718671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.448 [2024-07-15 21:20:02.722220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.448 [2024-07-15 21:20:02.731636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.448 [2024-07-15 21:20:02.732245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.448 [2024-07-15 21:20:02.732261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.448 [2024-07-15 21:20:02.732269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.448 [2024-07-15 21:20:02.732489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.448 [2024-07-15 21:20:02.732707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.448 [2024-07-15 21:20:02.732715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.448 [2024-07-15 21:20:02.732722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.709 [2024-07-15 21:20:02.736270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.709 [2024-07-15 21:20:02.745475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.709 [2024-07-15 21:20:02.746160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.709 [2024-07-15 21:20:02.746198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.709 [2024-07-15 21:20:02.746209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.709 [2024-07-15 21:20:02.746461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.709 [2024-07-15 21:20:02.746685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.709 [2024-07-15 21:20:02.746693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.709 [2024-07-15 21:20:02.746701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.709 [2024-07-15 21:20:02.750256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.709 [2024-07-15 21:20:02.759462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.709 [2024-07-15 21:20:02.760168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.709 [2024-07-15 21:20:02.760205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.709 [2024-07-15 21:20:02.760215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.709 [2024-07-15 21:20:02.760478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.709 [2024-07-15 21:20:02.760703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.709 [2024-07-15 21:20:02.760711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.709 [2024-07-15 21:20:02.760719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.709 [2024-07-15 21:20:02.764275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.709 [2024-07-15 21:20:02.773286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.709 [2024-07-15 21:20:02.773992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.709 [2024-07-15 21:20:02.774028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.709 [2024-07-15 21:20:02.774039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.709 [2024-07-15 21:20:02.774287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.709 [2024-07-15 21:20:02.774511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.709 [2024-07-15 21:20:02.774526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.709 [2024-07-15 21:20:02.774534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.709 [2024-07-15 21:20:02.778089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.709 [2024-07-15 21:20:02.787093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.709 [2024-07-15 21:20:02.787790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.709 [2024-07-15 21:20:02.787827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.709 [2024-07-15 21:20:02.787838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.709 [2024-07-15 21:20:02.788077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.709 [2024-07-15 21:20:02.788308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.709 [2024-07-15 21:20:02.788317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.709 [2024-07-15 21:20:02.788325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.709 [2024-07-15 21:20:02.791877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.709 [2024-07-15 21:20:02.801088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.709 [2024-07-15 21:20:02.801773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.709 [2024-07-15 21:20:02.801810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.709 [2024-07-15 21:20:02.801820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.709 [2024-07-15 21:20:02.802060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.709 [2024-07-15 21:20:02.802290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.709 [2024-07-15 21:20:02.802299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.709 [2024-07-15 21:20:02.802311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.709 [2024-07-15 21:20:02.805865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.709 [2024-07-15 21:20:02.815074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.815780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.815817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.815828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.816067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.816300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.816309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.816317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.819869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.710 [2024-07-15 21:20:02.828869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.829569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.829606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.829617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.829856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.830078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.830087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.830094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.833655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.710 [2024-07-15 21:20:02.842867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.843548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.843584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.843595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.843834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.844057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.844065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.844073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.847633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.710 [2024-07-15 21:20:02.856841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.857540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.857582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.857593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.857832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.858055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.858063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.858071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.861643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.710 [2024-07-15 21:20:02.870651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.871259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.871278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.871286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.871506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.871724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.871732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.871739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.875287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.710 [2024-07-15 21:20:02.884489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.885055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.885091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.885102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.885348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.885572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.885580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.885588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.889144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.710 [2024-07-15 21:20:02.898360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.899026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.899063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.899073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.899321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.899553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.899561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.899569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.903121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.710 [2024-07-15 21:20:02.912332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.913034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.913071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.913082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.913330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.913554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.913562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.913569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.917121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.710 [2024-07-15 21:20:02.926119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.926793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.926830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.926840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.927080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.927311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.927320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.927327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.930882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.710 [2024-07-15 21:20:02.940098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.940756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.940793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.940804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.941043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.941275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.941284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.710 [2024-07-15 21:20:02.941291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.710 [2024-07-15 21:20:02.944851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.710 [2024-07-15 21:20:02.954070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.710 [2024-07-15 21:20:02.954759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.710 [2024-07-15 21:20:02.954795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.710 [2024-07-15 21:20:02.954806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.710 [2024-07-15 21:20:02.955045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.710 [2024-07-15 21:20:02.955276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.710 [2024-07-15 21:20:02.955285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.711 [2024-07-15 21:20:02.955293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.711 [2024-07-15 21:20:02.958846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.711 [2024-07-15 21:20:02.968065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.711 [2024-07-15 21:20:02.968775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.711 [2024-07-15 21:20:02.968812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.711 [2024-07-15 21:20:02.968823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.711 [2024-07-15 21:20:02.969062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.711 [2024-07-15 21:20:02.969293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.711 [2024-07-15 21:20:02.969302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.711 [2024-07-15 21:20:02.969309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.711 [2024-07-15 21:20:02.972862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.711 [2024-07-15 21:20:02.981865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.711 [2024-07-15 21:20:02.982557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.711 [2024-07-15 21:20:02.982593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.711 [2024-07-15 21:20:02.982604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.711 [2024-07-15 21:20:02.982843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.711 [2024-07-15 21:20:02.983066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.711 [2024-07-15 21:20:02.983074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.711 [2024-07-15 21:20:02.983082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.711 [2024-07-15 21:20:02.986641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.711 [2024-07-15 21:20:02.995855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.711 [2024-07-15 21:20:02.996539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.711 [2024-07-15 21:20:02.996577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.711 [2024-07-15 21:20:02.996591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.711 [2024-07-15 21:20:02.996831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.711 [2024-07-15 21:20:02.997054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.711 [2024-07-15 21:20:02.997062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.711 [2024-07-15 21:20:02.997070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.972 [2024-07-15 21:20:03.000636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.972 [2024-07-15 21:20:03.009857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.972 [2024-07-15 21:20:03.010558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.972 [2024-07-15 21:20:03.010595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.972 [2024-07-15 21:20:03.010606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.972 [2024-07-15 21:20:03.010845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.972 [2024-07-15 21:20:03.011068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.972 [2024-07-15 21:20:03.011076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.972 [2024-07-15 21:20:03.011084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.972 [2024-07-15 21:20:03.014645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.972 [2024-07-15 21:20:03.023859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.972 [2024-07-15 21:20:03.024440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.972 [2024-07-15 21:20:03.024458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.972 [2024-07-15 21:20:03.024466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.972 [2024-07-15 21:20:03.024686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.972 [2024-07-15 21:20:03.024905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.972 [2024-07-15 21:20:03.024913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.972 [2024-07-15 21:20:03.024921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.972 [2024-07-15 21:20:03.028470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.972 [2024-07-15 21:20:03.037678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.972 [2024-07-15 21:20:03.038436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.972 [2024-07-15 21:20:03.038473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.972 [2024-07-15 21:20:03.038484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.972 [2024-07-15 21:20:03.038723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.972 [2024-07-15 21:20:03.038946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.972 [2024-07-15 21:20:03.038959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.972 [2024-07-15 21:20:03.038966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.972 [2024-07-15 21:20:03.042527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.972 [2024-07-15 21:20:03.051532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.972 [2024-07-15 21:20:03.052201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.972 [2024-07-15 21:20:03.052246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.972 [2024-07-15 21:20:03.052258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.972 [2024-07-15 21:20:03.052497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.972 [2024-07-15 21:20:03.052721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.972 [2024-07-15 21:20:03.052729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.972 [2024-07-15 21:20:03.052736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.972 [2024-07-15 21:20:03.056296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.972 [2024-07-15 21:20:03.065520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.972 [2024-07-15 21:20:03.066227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.972 [2024-07-15 21:20:03.066270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.972 [2024-07-15 21:20:03.066281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.972 [2024-07-15 21:20:03.066520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.972 [2024-07-15 21:20:03.066742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.972 [2024-07-15 21:20:03.066750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.972 [2024-07-15 21:20:03.066758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.972 [2024-07-15 21:20:03.070313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.972 [2024-07-15 21:20:03.079314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.972 [2024-07-15 21:20:03.080021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.972 [2024-07-15 21:20:03.080058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.972 [2024-07-15 21:20:03.080068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.972 [2024-07-15 21:20:03.080319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.972 [2024-07-15 21:20:03.080543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.972 [2024-07-15 21:20:03.080551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.972 [2024-07-15 21:20:03.080559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.972 [2024-07-15 21:20:03.084182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.972 [2024-07-15 21:20:03.093282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.972 [2024-07-15 21:20:03.093992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.972 [2024-07-15 21:20:03.094029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.972 [2024-07-15 21:20:03.094040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.972 [2024-07-15 21:20:03.094288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.972 [2024-07-15 21:20:03.094511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.972 [2024-07-15 21:20:03.094519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.972 [2024-07-15 21:20:03.094526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.972 [2024-07-15 21:20:03.098078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.972 [2024-07-15 21:20:03.107081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.972 [2024-07-15 21:20:03.107790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.107827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.107838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.108077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.108309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.108318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.108326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.111879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.973 [2024-07-15 21:20:03.120880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.121606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.121643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.121653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.121893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.122116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.122124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.122131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.125693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.973 [2024-07-15 21:20:03.134691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.135310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.135347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.135359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.135604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.135827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.135835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.135843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.139405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.973 [2024-07-15 21:20:03.148613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.149269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.149305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.149316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.149750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.149974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.149982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.149990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.153550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.973 [2024-07-15 21:20:03.162573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.163222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.163266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.163278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.163519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.163742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.163749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.163757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.167319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.973 [2024-07-15 21:20:03.176528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.177193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.177237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.177249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.177489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.177712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.177720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.177731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.181287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.973 [2024-07-15 21:20:03.190501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.191141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.191178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.191190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.191442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.191665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.191674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.191681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.195237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.973 [2024-07-15 21:20:03.204448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.205101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.205137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.205149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.205400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.205624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.205633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.205640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.209195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.973 [2024-07-15 21:20:03.218410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.219072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.219108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.219119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.219367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.219591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.219599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.219607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.223161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.973 [2024-07-15 21:20:03.232374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.232955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.232972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.232980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.233199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.233424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.233433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.233442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.236988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.973 [2024-07-15 21:20:03.246198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.246806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.973 [2024-07-15 21:20:03.246821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.973 [2024-07-15 21:20:03.246829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.973 [2024-07-15 21:20:03.247048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:35.973 [2024-07-15 21:20:03.247273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.973 [2024-07-15 21:20:03.247281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.973 [2024-07-15 21:20:03.247288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.973 [2024-07-15 21:20:03.250837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.973 [2024-07-15 21:20:03.260045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.973 [2024-07-15 21:20:03.260614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.974 [2024-07-15 21:20:03.260630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:35.974 [2024-07-15 21:20:03.260637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:35.974 [2024-07-15 21:20:03.260856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.233 [2024-07-15 21:20:03.261074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.233 [2024-07-15 21:20:03.261084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.233 [2024-07-15 21:20:03.261091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.233 [2024-07-15 21:20:03.264640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.233 [2024-07-15 21:20:03.273846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.233 [2024-07-15 21:20:03.274503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.233 [2024-07-15 21:20:03.274540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.233 [2024-07-15 21:20:03.274551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.233 [2024-07-15 21:20:03.274795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.233 [2024-07-15 21:20:03.275018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.233 [2024-07-15 21:20:03.275027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.233 [2024-07-15 21:20:03.275034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.233 [2024-07-15 21:20:03.278595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.233 [2024-07-15 21:20:03.287817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.233 [2024-07-15 21:20:03.288354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.233 [2024-07-15 21:20:03.288391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.233 [2024-07-15 21:20:03.288404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.233 [2024-07-15 21:20:03.288646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.288870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.288879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.288887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.292450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.234 [2024-07-15 21:20:03.301665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.302458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.302495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.302506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.302745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.302968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.302976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.302984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.306544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.234 [2024-07-15 21:20:03.315549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.316270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.316306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.316317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.316557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.316780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.316789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.316801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.320364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.234 [2024-07-15 21:20:03.329368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.330067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.330103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.330114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.330361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.330584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.330593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.330600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.334154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.234 [2024-07-15 21:20:03.343387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.344119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.344155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.344167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.344415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.344639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.344647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.344655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.348208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.234 [2024-07-15 21:20:03.357228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.357824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.357841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.357849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.358069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.358296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.358304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.358311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.361871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.234 [2024-07-15 21:20:03.371086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.371748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.371789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.371800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.372039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.372270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.372279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.372287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.375843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.234 [2024-07-15 21:20:03.385061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.385773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.385810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.385820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.386059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.386290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.386299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.386306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.389862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.234 [2024-07-15 21:20:03.398877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.399586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.399624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.399634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.399874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.400097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.400105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.400113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.403673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.234 [2024-07-15 21:20:03.412681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.413273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.413297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.413305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.413530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.413755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.413763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.413771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.417325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.234 [2024-07-15 21:20:03.426538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.427140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.234 [2024-07-15 21:20:03.427156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.234 [2024-07-15 21:20:03.427164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.234 [2024-07-15 21:20:03.427388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.234 [2024-07-15 21:20:03.427608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.234 [2024-07-15 21:20:03.427615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.234 [2024-07-15 21:20:03.427622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.234 [2024-07-15 21:20:03.431166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.234 [2024-07-15 21:20:03.440378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.234 [2024-07-15 21:20:03.441069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.235 [2024-07-15 21:20:03.441106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.235 [2024-07-15 21:20:03.441117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.235 [2024-07-15 21:20:03.441363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.235 [2024-07-15 21:20:03.441587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.235 [2024-07-15 21:20:03.441595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.235 [2024-07-15 21:20:03.441603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.235 [2024-07-15 21:20:03.445155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.235 [2024-07-15 21:20:03.454376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.235 [2024-07-15 21:20:03.454956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.235 [2024-07-15 21:20:03.454973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.235 [2024-07-15 21:20:03.454981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.235 [2024-07-15 21:20:03.455201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.235 [2024-07-15 21:20:03.455425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.235 [2024-07-15 21:20:03.455433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.235 [2024-07-15 21:20:03.455440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.235 [2024-07-15 21:20:03.458993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.235 [2024-07-15 21:20:03.468212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.235 [2024-07-15 21:20:03.468781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.235 [2024-07-15 21:20:03.468797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.235 [2024-07-15 21:20:03.468805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.235 [2024-07-15 21:20:03.469023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.235 [2024-07-15 21:20:03.469247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.235 [2024-07-15 21:20:03.469255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.235 [2024-07-15 21:20:03.469262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.235 [2024-07-15 21:20:03.472806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.235 [2024-07-15 21:20:03.482017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.235 [2024-07-15 21:20:03.482712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.235 [2024-07-15 21:20:03.482750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.235 [2024-07-15 21:20:03.482760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.235 [2024-07-15 21:20:03.483000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.235 [2024-07-15 21:20:03.483223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.235 [2024-07-15 21:20:03.483239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.235 [2024-07-15 21:20:03.483247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.235 [2024-07-15 21:20:03.486802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.235 [2024-07-15 21:20:03.496016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.235 [2024-07-15 21:20:03.496603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.235 [2024-07-15 21:20:03.496621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.235 [2024-07-15 21:20:03.496629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.235 [2024-07-15 21:20:03.496849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.235 [2024-07-15 21:20:03.497068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.235 [2024-07-15 21:20:03.497076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.235 [2024-07-15 21:20:03.497083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.235 [2024-07-15 21:20:03.500636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.235 [2024-07-15 21:20:03.509842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.235 [2024-07-15 21:20:03.510472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.235 [2024-07-15 21:20:03.510489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.235 [2024-07-15 21:20:03.510500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.235 [2024-07-15 21:20:03.510720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.235 [2024-07-15 21:20:03.510939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.235 [2024-07-15 21:20:03.510946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.235 [2024-07-15 21:20:03.510953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.235 [2024-07-15 21:20:03.514505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.495 [2024-07-15 21:20:03.523711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.495 [2024-07-15 21:20:03.524455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.495 [2024-07-15 21:20:03.524492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.495 [2024-07-15 21:20:03.524502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.495 [2024-07-15 21:20:03.524742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.495 [2024-07-15 21:20:03.524965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.495 [2024-07-15 21:20:03.524973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.495 [2024-07-15 21:20:03.524981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.495 [2024-07-15 21:20:03.528541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.495 [2024-07-15 21:20:03.537546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.495 [2024-07-15 21:20:03.538169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.495 [2024-07-15 21:20:03.538187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.495 [2024-07-15 21:20:03.538195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.495 [2024-07-15 21:20:03.538420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.495 [2024-07-15 21:20:03.538640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.495 [2024-07-15 21:20:03.538647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.495 [2024-07-15 21:20:03.538654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.495 [2024-07-15 21:20:03.542201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.495 [2024-07-15 21:20:03.551412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.495 [2024-07-15 21:20:03.551859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.495 [2024-07-15 21:20:03.551875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.495 [2024-07-15 21:20:03.551883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.552101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.552327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.552339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.552347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.555892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.496 [2024-07-15 21:20:03.565321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.565821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.565836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.565843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.566061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.566285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.566294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.566301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.569845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.496 [2024-07-15 21:20:03.579261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.579867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.579882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.579889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.580108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.580331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.580339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.580346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.583890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.496 [2024-07-15 21:20:03.593098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.593666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.593681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.593688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.593907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.594126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.594133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.594140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.597690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.496 [2024-07-15 21:20:03.606911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.607456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.607472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.607480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.607698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.607917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.607925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.607932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.611479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.496 [2024-07-15 21:20:03.620896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.621633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.621670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.621681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.621921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.622144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.622152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.622159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.625721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.496 [2024-07-15 21:20:03.634729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.635348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.635386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.635397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.635639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.635862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.635872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.635880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.639443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.496 [2024-07-15 21:20:03.648656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.649319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.649356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.649368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.649613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.649836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.649844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.649852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.653414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.496 [2024-07-15 21:20:03.662639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.663338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.663376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.663388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.663631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.663854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.663863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.663870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.667433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.496 [2024-07-15 21:20:03.676433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.677044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.677061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.677069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.677293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.677513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.677520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.677527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.681076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.496 [2024-07-15 21:20:03.690291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.690950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.690987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.496 [2024-07-15 21:20:03.690998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.496 [2024-07-15 21:20:03.691246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.496 [2024-07-15 21:20:03.691469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.496 [2024-07-15 21:20:03.691478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.496 [2024-07-15 21:20:03.691490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.496 [2024-07-15 21:20:03.695046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.496 [2024-07-15 21:20:03.704267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.496 [2024-07-15 21:20:03.704933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.496 [2024-07-15 21:20:03.704970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.497 [2024-07-15 21:20:03.704980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.497 [2024-07-15 21:20:03.705219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.497 [2024-07-15 21:20:03.705449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.497 [2024-07-15 21:20:03.705458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.497 [2024-07-15 21:20:03.705466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.497 [2024-07-15 21:20:03.709021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.497 [2024-07-15 21:20:03.718239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.497 [2024-07-15 21:20:03.718826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.497 [2024-07-15 21:20:03.718843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.497 [2024-07-15 21:20:03.718851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.497 [2024-07-15 21:20:03.719070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.497 [2024-07-15 21:20:03.719296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.497 [2024-07-15 21:20:03.719304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.497 [2024-07-15 21:20:03.719311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.497 [2024-07-15 21:20:03.722858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.497 [2024-07-15 21:20:03.732106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.497 [2024-07-15 21:20:03.732771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.497 [2024-07-15 21:20:03.732808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.497 [2024-07-15 21:20:03.732818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.497 [2024-07-15 21:20:03.733058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.497 [2024-07-15 21:20:03.733288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.497 [2024-07-15 21:20:03.733297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.497 [2024-07-15 21:20:03.733305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.497 [2024-07-15 21:20:03.736857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.497 [2024-07-15 21:20:03.746072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.497 [2024-07-15 21:20:03.746668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.497 [2024-07-15 21:20:03.746687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.497 [2024-07-15 21:20:03.746695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.497 [2024-07-15 21:20:03.746915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.497 [2024-07-15 21:20:03.747134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.497 [2024-07-15 21:20:03.747143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.497 [2024-07-15 21:20:03.747150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.497 [2024-07-15 21:20:03.750702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.497 [2024-07-15 21:20:03.759909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.497 [2024-07-15 21:20:03.760470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.497 [2024-07-15 21:20:03.760486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.497 [2024-07-15 21:20:03.760494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.497 [2024-07-15 21:20:03.760713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.497 [2024-07-15 21:20:03.760931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.497 [2024-07-15 21:20:03.760939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.497 [2024-07-15 21:20:03.760946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.497 [2024-07-15 21:20:03.764505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.497 [2024-07-15 21:20:03.773716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.497 [2024-07-15 21:20:03.774349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.497 [2024-07-15 21:20:03.774364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.497 [2024-07-15 21:20:03.774372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.497 [2024-07-15 21:20:03.774591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.497 [2024-07-15 21:20:03.774809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.497 [2024-07-15 21:20:03.774817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.497 [2024-07-15 21:20:03.774824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.497 [2024-07-15 21:20:03.778372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.758 [2024-07-15 21:20:03.787583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.758 [2024-07-15 21:20:03.788191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.758 [2024-07-15 21:20:03.788205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.758 [2024-07-15 21:20:03.788212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.758 [2024-07-15 21:20:03.788439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.758 [2024-07-15 21:20:03.788658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.758 [2024-07-15 21:20:03.788666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.758 [2024-07-15 21:20:03.788673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.758 [2024-07-15 21:20:03.792215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.758 [2024-07-15 21:20:03.801464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.758 [2024-07-15 21:20:03.802026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.758 [2024-07-15 21:20:03.802041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.758 [2024-07-15 21:20:03.802048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.758 [2024-07-15 21:20:03.802271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.758 [2024-07-15 21:20:03.802491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.758 [2024-07-15 21:20:03.802499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.758 [2024-07-15 21:20:03.802505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.758 [2024-07-15 21:20:03.806049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.758 [2024-07-15 21:20:03.815264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.758 [2024-07-15 21:20:03.815918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.758 [2024-07-15 21:20:03.815955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.758 [2024-07-15 21:20:03.815966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.758 [2024-07-15 21:20:03.816205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.758 [2024-07-15 21:20:03.816436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.758 [2024-07-15 21:20:03.816446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.758 [2024-07-15 21:20:03.816453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.758 [2024-07-15 21:20:03.820005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.758 [2024-07-15 21:20:03.829220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.758 [2024-07-15 21:20:03.829891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.758 [2024-07-15 21:20:03.829928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.758 [2024-07-15 21:20:03.829939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.758 [2024-07-15 21:20:03.830178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.758 [2024-07-15 21:20:03.830409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.758 [2024-07-15 21:20:03.830418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.758 [2024-07-15 21:20:03.830430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.758 [2024-07-15 21:20:03.833984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.758 [2024-07-15 21:20:03.843201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.758 [2024-07-15 21:20:03.843656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.758 [2024-07-15 21:20:03.843674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.758 [2024-07-15 21:20:03.843682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.758 [2024-07-15 21:20:03.843902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.758 [2024-07-15 21:20:03.844120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.758 [2024-07-15 21:20:03.844128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.758 [2024-07-15 21:20:03.844135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.758 [2024-07-15 21:20:03.847688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.758 [2024-07-15 21:20:03.857107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.758 [2024-07-15 21:20:03.857773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.758 [2024-07-15 21:20:03.857811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.758 [2024-07-15 21:20:03.857821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.758 [2024-07-15 21:20:03.858061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.758 [2024-07-15 21:20:03.858291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.758 [2024-07-15 21:20:03.858300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.758 [2024-07-15 21:20:03.858307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.758 [2024-07-15 21:20:03.861868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.759 [2024-07-15 21:20:03.871081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.871776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.871814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.871824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.872064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.872296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.872305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.872312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.875865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.759 [2024-07-15 21:20:03.885081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.885776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.885817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.885828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.886067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.886298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.886307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.886315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.889873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.759 [2024-07-15 21:20:03.898918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.899505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.899523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.899531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.899751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.899970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.899977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.899984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.903540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.759 [2024-07-15 21:20:03.912762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.913354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.913371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.913378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.913597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.913816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.913824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.913831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.917386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.759 [2024-07-15 21:20:03.926604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.927212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.927227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.927241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.927459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.927683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.927690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.927697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.931250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.759 [2024-07-15 21:20:03.940475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.941128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.941165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.941176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.941424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.941647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.941656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.941663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.945222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.759 [2024-07-15 21:20:03.954455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.955161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.955197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.955209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.955459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.955683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.955691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.955699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.959260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.759 [2024-07-15 21:20:03.968290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.969002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.969039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.969050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.969297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.969521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.969529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.969536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.973100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.759 [2024-07-15 21:20:03.982112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.982738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.982756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.982764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.982983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.983202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.983209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.983216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:03.986769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.759 [2024-07-15 21:20:03.995981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:03.996522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:03.996538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:03.996546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:03.996765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:03.996984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:03.996992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:03.996998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 [2024-07-15 21:20:04.000548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.759 [2024-07-15 21:20:04.009964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.759 [2024-07-15 21:20:04.010620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.759 [2024-07-15 21:20:04.010656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.759 [2024-07-15 21:20:04.010667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.759 [2024-07-15 21:20:04.010906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.759 [2024-07-15 21:20:04.011129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.759 [2024-07-15 21:20:04.011137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.759 [2024-07-15 21:20:04.011145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2157130 Killed "${NVMF_APP[@]}" "$@" 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:36.760 [2024-07-15 21:20:04.014708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2158832 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2158832 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2158832 ']' 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.760 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:36.760 [2024-07-15 21:20:04.023926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.760 [2024-07-15 21:20:04.024594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.760 [2024-07-15 21:20:04.024632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.760 [2024-07-15 21:20:04.024644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.760 [2024-07-15 21:20:04.024885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.760 [2024-07-15 21:20:04.025108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.760 [2024-07-15 21:20:04.025117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.760 [2024-07-15 21:20:04.025125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.760 [2024-07-15 21:20:04.028689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
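Interleaved with the reconnect errors, the xtrace output above shows why the connections are being refused: the old target process (PID 2157130) was killed, and tgt_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xE, then blocks in waitforlisten until the new process (PID 2158832) answers on /var/tmp/spdk.sock. The snippet below is a simplified sketch of that wait-for-RPC-socket idea, not SPDK's actual waitforlisten helper; the socket path comes from the log, the rpc.py location is an assumed standard SPDK layout, and the polling loop itself is an illustration.

# Simplified sketch: poll the target's RPC UNIX socket until it answers.
# Not the real waitforlisten implementation.
rpc_sock=/var/tmp/spdk.sock
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
deadline=$((SECONDS + 30))
until "$rpc_py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    if (( SECONDS >= deadline )); then
        echo "nvmf_tgt did not start listening on $rpc_sock" >&2
        exit 1
    fi
    sleep 0.5
done
echo "target is up; RPC socket $rpc_sock is answering"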
00:29:36.760 [2024-07-15 21:20:04.037902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.760 [2024-07-15 21:20:04.038491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.760 [2024-07-15 21:20:04.038509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:36.760 [2024-07-15 21:20:04.038517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:36.760 [2024-07-15 21:20:04.038737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:36.760 [2024-07-15 21:20:04.038956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.760 [2024-07-15 21:20:04.038964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.760 [2024-07-15 21:20:04.038971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.760 [2024-07-15 21:20:04.042519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.021 [2024-07-15 21:20:04.051766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.021 [2024-07-15 21:20:04.052498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.021 [2024-07-15 21:20:04.052535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.021 [2024-07-15 21:20:04.052546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.021 [2024-07-15 21:20:04.052790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.021 [2024-07-15 21:20:04.053013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.021 [2024-07-15 21:20:04.053021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.021 [2024-07-15 21:20:04.053029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.021 [2024-07-15 21:20:04.056589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.021 [2024-07-15 21:20:04.065603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.021 [2024-07-15 21:20:04.066279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.021 [2024-07-15 21:20:04.066317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.021 [2024-07-15 21:20:04.066328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.021 [2024-07-15 21:20:04.066567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.021 [2024-07-15 21:20:04.066790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.021 [2024-07-15 21:20:04.066798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.021 [2024-07-15 21:20:04.066806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.021 [2024-07-15 21:20:04.070368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.021 [2024-07-15 21:20:04.075118] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:29:37.021 [2024-07-15 21:20:04.075172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.021 [2024-07-15 21:20:04.079576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.021 [2024-07-15 21:20:04.080167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.021 [2024-07-15 21:20:04.080185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.021 [2024-07-15 21:20:04.080193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.021 [2024-07-15 21:20:04.080420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.021 [2024-07-15 21:20:04.080639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.021 [2024-07-15 21:20:04.080648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.021 [2024-07-15 21:20:04.080655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.021 [2024-07-15 21:20:04.084203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.021 [2024-07-15 21:20:04.093413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.021 [2024-07-15 21:20:04.094113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.021 [2024-07-15 21:20:04.094150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.021 [2024-07-15 21:20:04.094161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.021 [2024-07-15 21:20:04.094413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.021 [2024-07-15 21:20:04.094637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.021 [2024-07-15 21:20:04.094645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.094653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 [2024-07-15 21:20:04.098205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.022 [2024-07-15 21:20:04.107211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.107906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.107944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.107955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.108194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.108427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.108436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.108444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.022 [2024-07-15 21:20:04.111996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.022 [2024-07-15 21:20:04.121096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.121783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.121820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.121831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.122070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.122300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.122309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.122317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 [2024-07-15 21:20:04.125870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.022 [2024-07-15 21:20:04.135081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.135808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.135845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.135856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.136095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.136325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.136334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.136346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 [2024-07-15 21:20:04.139902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.022 [2024-07-15 21:20:04.148901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.149593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.149630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.149641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.150081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.150314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.150323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.150331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 [2024-07-15 21:20:04.153883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.022 [2024-07-15 21:20:04.161724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:37.022 [2024-07-15 21:20:04.162892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.163473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.163510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.163521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.163760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.163983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.163992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.163999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 [2024-07-15 21:20:04.167566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.022 [2024-07-15 21:20:04.176815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.177527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.177564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.177575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.177815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.178037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.178046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.178053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 [2024-07-15 21:20:04.181614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.022 [2024-07-15 21:20:04.190615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.191309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.191347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.191359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.191602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.191825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.191833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.191841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 [2024-07-15 21:20:04.195404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.022 [2024-07-15 21:20:04.204614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.205273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.205298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.205306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.205531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.205751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.205759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.205767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.022 [2024-07-15 21:20:04.209325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.022 [2024-07-15 21:20:04.215033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.022 [2024-07-15 21:20:04.215057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.022 [2024-07-15 21:20:04.215064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.022 [2024-07-15 21:20:04.215069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.022 [2024-07-15 21:20:04.215073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.022 [2024-07-15 21:20:04.215226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.022 [2024-07-15 21:20:04.215362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.022 [2024-07-15 21:20:04.215496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.022 [2024-07-15 21:20:04.218531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.219265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.219303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.219315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.219558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.219781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.022 [2024-07-15 21:20:04.219794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.022 [2024-07-15 21:20:04.219802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:37.022 [2024-07-15 21:20:04.223363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.022 [2024-07-15 21:20:04.232373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.022 [2024-07-15 21:20:04.233112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.022 [2024-07-15 21:20:04.233150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.022 [2024-07-15 21:20:04.233161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.022 [2024-07-15 21:20:04.233410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.022 [2024-07-15 21:20:04.233633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.023 [2024-07-15 21:20:04.233642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.023 [2024-07-15 21:20:04.233650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.023 [2024-07-15 21:20:04.237200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.023 [2024-07-15 21:20:04.246200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.023 [2024-07-15 21:20:04.246723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.023 [2024-07-15 21:20:04.246760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.023 [2024-07-15 21:20:04.246773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.023 [2024-07-15 21:20:04.247016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.023 [2024-07-15 21:20:04.247246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.023 [2024-07-15 21:20:04.247255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.023 [2024-07-15 21:20:04.247263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.023 [2024-07-15 21:20:04.250815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
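[annotation, not part of the captured log] The -m 0xE core mask passed to the restarted target explains the scheduler notices above: 0xE is binary 1110, which selects cores 1, 2 and 3, matching "Total cores available: 3" and the three "Reactor started on core" entries. A small standalone sketch of decoding such a mask (an illustration, not SPDK code):

/* Minimal sketch (not SPDK code): decode a reactor core mask such as the
 * -m 0xE passed to nvmf_tgt above.  0xE = 0b1110 selects cores 1, 2 and 3,
 * which is why three reactors start and core 0 is left alone. */
#include <stdio.h>

int main(void)
{
	unsigned long long mask = 0xE;  /* value taken from the -m option in the log */
	int total = 0;

	for (int core = 0; core < 64; core++) {
		if (mask & (1ULL << core)) {
			printf("reactor would run on core %d\n", core);
			total++;
		}
	}
	printf("total cores selected: %d\n", total);
	return 0;
}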
00:29:37.023 [2024-07-15 21:20:04.260030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.023 [2024-07-15 21:20:04.260740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.023 [2024-07-15 21:20:04.260778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.023 [2024-07-15 21:20:04.260789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.023 [2024-07-15 21:20:04.261029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.023 [2024-07-15 21:20:04.261273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.023 [2024-07-15 21:20:04.261282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.023 [2024-07-15 21:20:04.261290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.023 [2024-07-15 21:20:04.264845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.023 [2024-07-15 21:20:04.273852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.023 [2024-07-15 21:20:04.274577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.023 [2024-07-15 21:20:04.274615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.023 [2024-07-15 21:20:04.274626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.023 [2024-07-15 21:20:04.274866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.023 [2024-07-15 21:20:04.275089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.023 [2024-07-15 21:20:04.275097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.023 [2024-07-15 21:20:04.275105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.023 [2024-07-15 21:20:04.278665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.023 [2024-07-15 21:20:04.287667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.023 [2024-07-15 21:20:04.288354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.023 [2024-07-15 21:20:04.288391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.023 [2024-07-15 21:20:04.288402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.023 [2024-07-15 21:20:04.288642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.023 [2024-07-15 21:20:04.288865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.023 [2024-07-15 21:20:04.288873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.023 [2024-07-15 21:20:04.288881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.023 [2024-07-15 21:20:04.292443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.023 [2024-07-15 21:20:04.301655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.023 [2024-07-15 21:20:04.302361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.023 [2024-07-15 21:20:04.302399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.023 [2024-07-15 21:20:04.302411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.023 [2024-07-15 21:20:04.302654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.023 [2024-07-15 21:20:04.302876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.023 [2024-07-15 21:20:04.302885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.023 [2024-07-15 21:20:04.302892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.023 [2024-07-15 21:20:04.306452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.284 [2024-07-15 21:20:04.315453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.316039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.316056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.316064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.316298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.316519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.284 [2024-07-15 21:20:04.316527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.284 [2024-07-15 21:20:04.316534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.284 [2024-07-15 21:20:04.320077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.284 [2024-07-15 21:20:04.329285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.329960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.329997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.330008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.330254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.330478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.284 [2024-07-15 21:20:04.330486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.284 [2024-07-15 21:20:04.330494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.284 [2024-07-15 21:20:04.334047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.284 [2024-07-15 21:20:04.343264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.343904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.343941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.343952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.344191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.344422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.284 [2024-07-15 21:20:04.344431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.284 [2024-07-15 21:20:04.344438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.284 [2024-07-15 21:20:04.347987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.284 [2024-07-15 21:20:04.357195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.357719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.357757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.357768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.358007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.358238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.284 [2024-07-15 21:20:04.358247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.284 [2024-07-15 21:20:04.358258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.284 [2024-07-15 21:20:04.361820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.284 [2024-07-15 21:20:04.371030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.371721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.371757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.371768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.372008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.372240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.284 [2024-07-15 21:20:04.372254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.284 [2024-07-15 21:20:04.372265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.284 [2024-07-15 21:20:04.375817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.284 [2024-07-15 21:20:04.385024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.385723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.385760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.385770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.386009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.386239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.284 [2024-07-15 21:20:04.386248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.284 [2024-07-15 21:20:04.386255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.284 [2024-07-15 21:20:04.389804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.284 [2024-07-15 21:20:04.399012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.399712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.399749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.399760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.400000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.400222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.284 [2024-07-15 21:20:04.400238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.284 [2024-07-15 21:20:04.400247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.284 [2024-07-15 21:20:04.403797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.284 [2024-07-15 21:20:04.413003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.413731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.413768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.413779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.414019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.414249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.284 [2024-07-15 21:20:04.414257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.284 [2024-07-15 21:20:04.414265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.284 [2024-07-15 21:20:04.417816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.284 [2024-07-15 21:20:04.426815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.284 [2024-07-15 21:20:04.427290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 21:20:04.427316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.284 [2024-07-15 21:20:04.427324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.284 [2024-07-15 21:20:04.427549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.284 [2024-07-15 21:20:04.427769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.427777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.427784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.431337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.285 [2024-07-15 21:20:04.440746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.441361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.441398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.441410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.441652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.441875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.441884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.441892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.445452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.285 [2024-07-15 21:20:04.454663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.455369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.455407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.455419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.455662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.455890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.455898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.455906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.459469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.285 [2024-07-15 21:20:04.468485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.469117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.469135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.469143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.469368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.469588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.469596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.469603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.473146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.285 [2024-07-15 21:20:04.482353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.482815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.482834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.482842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.483063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.483288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.483297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.483305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.486852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.285 [2024-07-15 21:20:04.496272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.496792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.496829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.496839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.497079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.497309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.497318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.497326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.500881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.285 [2024-07-15 21:20:04.510089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.510722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.510740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.510747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.510967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.511186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.511194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.511201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.514751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.285 [2024-07-15 21:20:04.523953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.524654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.524691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.524701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.524941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.525164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.525172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.525180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.528739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.285 [2024-07-15 21:20:04.537953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.538642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.538679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.538689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.538928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.539151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.539160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.539167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.542725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.285 [2024-07-15 21:20:04.551939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.552637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.552674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.552689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.552928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.553151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.553159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.553166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.556725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.285 [2024-07-15 21:20:04.565735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.285 [2024-07-15 21:20:04.566341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 21:20:04.566378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.285 [2024-07-15 21:20:04.566389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.285 [2024-07-15 21:20:04.566632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.285 [2024-07-15 21:20:04.566855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.285 [2024-07-15 21:20:04.566863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.285 [2024-07-15 21:20:04.566871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.285 [2024-07-15 21:20:04.570431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.546 [2024-07-15 21:20:04.579645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.546 [2024-07-15 21:20:04.580434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-07-15 21:20:04.580471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.546 [2024-07-15 21:20:04.580483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.546 [2024-07-15 21:20:04.580722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.546 [2024-07-15 21:20:04.580945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.546 [2024-07-15 21:20:04.580953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.546 [2024-07-15 21:20:04.580961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.546 [2024-07-15 21:20:04.584520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.546 [2024-07-15 21:20:04.593520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.546 [2024-07-15 21:20:04.594133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-07-15 21:20:04.594150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.546 [2024-07-15 21:20:04.594158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.594382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.594607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.594616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.594623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.598169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.547 [2024-07-15 21:20:04.607375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.607838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.607853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.607861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.608079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.608303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.608311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.608318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.611860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.547 [2024-07-15 21:20:04.621273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.621959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.621996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.622006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.622253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.622478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.622486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.622494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.626092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.547 [2024-07-15 21:20:04.635091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.635773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.635811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.635821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.636061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.636290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.636299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.636306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.639859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.547 [2024-07-15 21:20:04.649077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.649804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.649841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.649852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.650091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.650321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.650330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.650337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.653887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.547 [2024-07-15 21:20:04.662901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.663417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.663455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.663467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.663710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.663933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.663941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.663948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.667510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.547 [2024-07-15 21:20:04.676728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.677453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.677490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.677501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.677740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.677963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.677972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.677979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.681538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.547 [2024-07-15 21:20:04.690547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.691277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.691314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.691331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.691574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.691796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.691805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.691813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.695374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.547 [2024-07-15 21:20:04.704383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.704986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.705023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.705035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.705281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.705505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.705513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.705521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.709074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.547 [2024-07-15 21:20:04.718285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.718877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.718895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.718903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.719122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.719347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.719356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.719364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.722910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.547 [2024-07-15 21:20:04.732116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.732801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.732838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.732849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.547 [2024-07-15 21:20:04.733089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.547 [2024-07-15 21:20:04.733319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.547 [2024-07-15 21:20:04.733332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.547 [2024-07-15 21:20:04.733339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.547 [2024-07-15 21:20:04.736894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.547 [2024-07-15 21:20:04.746109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.547 [2024-07-15 21:20:04.746654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-07-15 21:20:04.746690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.547 [2024-07-15 21:20:04.746701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.548 [2024-07-15 21:20:04.746940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.548 [2024-07-15 21:20:04.747162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.548 [2024-07-15 21:20:04.747171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.548 [2024-07-15 21:20:04.747179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.548 [2024-07-15 21:20:04.750741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.548 [2024-07-15 21:20:04.759956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.548 [2024-07-15 21:20:04.760646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-07-15 21:20:04.760684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.548 [2024-07-15 21:20:04.760696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.548 [2024-07-15 21:20:04.760935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.548 [2024-07-15 21:20:04.761158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.548 [2024-07-15 21:20:04.761166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.548 [2024-07-15 21:20:04.761174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.548 [2024-07-15 21:20:04.764745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.548 [2024-07-15 21:20:04.773960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.548 [2024-07-15 21:20:04.774654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-07-15 21:20:04.774692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.548 [2024-07-15 21:20:04.774703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.548 [2024-07-15 21:20:04.774942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.548 [2024-07-15 21:20:04.775165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.548 [2024-07-15 21:20:04.775174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.548 [2024-07-15 21:20:04.775182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.548 [2024-07-15 21:20:04.778743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.548 [2024-07-15 21:20:04.787954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.548 [2024-07-15 21:20:04.788522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-07-15 21:20:04.788560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.548 [2024-07-15 21:20:04.788572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.548 [2024-07-15 21:20:04.788812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.548 [2024-07-15 21:20:04.789036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.548 [2024-07-15 21:20:04.789044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.548 [2024-07-15 21:20:04.789052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.548 [2024-07-15 21:20:04.792612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.548 [2024-07-15 21:20:04.801822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.548 [2024-07-15 21:20:04.802543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-07-15 21:20:04.802580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.548 [2024-07-15 21:20:04.802591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.548 [2024-07-15 21:20:04.802830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.548 [2024-07-15 21:20:04.803053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.548 [2024-07-15 21:20:04.803061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.548 [2024-07-15 21:20:04.803069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.548 [2024-07-15 21:20:04.806625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.548 [2024-07-15 21:20:04.815633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.548 [2024-07-15 21:20:04.816364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-07-15 21:20:04.816401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.548 [2024-07-15 21:20:04.816411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.548 [2024-07-15 21:20:04.816650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.548 [2024-07-15 21:20:04.816873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.548 [2024-07-15 21:20:04.816881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.548 [2024-07-15 21:20:04.816889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.548 [2024-07-15 21:20:04.820444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.548 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.548 [2024-07-15 21:20:04.829441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.548 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:37.548 [2024-07-15 21:20:04.829955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-07-15 21:20:04.829973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.548 [2024-07-15 21:20:04.829985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.548 21:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:37.548 [2024-07-15 21:20:04.830205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.548 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.548 [2024-07-15 21:20:04.830429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.548 [2024-07-15 21:20:04.830438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.548 [2024-07-15 21:20:04.830444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.548 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.548 [2024-07-15 21:20:04.833992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.809 [2024-07-15 21:20:04.843415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.809 [2024-07-15 21:20:04.843980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.809 [2024-07-15 21:20:04.843996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.809 [2024-07-15 21:20:04.844003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.809 [2024-07-15 21:20:04.844223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.809 [2024-07-15 21:20:04.844448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.809 [2024-07-15 21:20:04.844456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.809 [2024-07-15 21:20:04.844464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.809 [2024-07-15 21:20:04.848008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.809 [2024-07-15 21:20:04.857224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.809 [2024-07-15 21:20:04.857911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.809 [2024-07-15 21:20:04.857949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.809 [2024-07-15 21:20:04.857960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.809 [2024-07-15 21:20:04.858199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.809 [2024-07-15 21:20:04.858430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.809 [2024-07-15 21:20:04.858440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.809 [2024-07-15 21:20:04.858447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.809 [2024-07-15 21:20:04.862010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.809 21:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.809 21:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:37.809 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.809 [2024-07-15 21:20:04.871021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.809 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.809 [2024-07-15 21:20:04.871711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.809 [2024-07-15 21:20:04.871749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.809 [2024-07-15 21:20:04.871759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.809 [2024-07-15 21:20:04.871999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.809 [2024-07-15 21:20:04.872222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.809 [2024-07-15 21:20:04.872238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.809 [2024-07-15 21:20:04.872247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.809 [2024-07-15 21:20:04.874245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.809 [2024-07-15 21:20:04.875801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.809 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.809 21:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.809 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.809 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.809 [2024-07-15 21:20:04.885009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.809 [2024-07-15 21:20:04.885643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.809 [2024-07-15 21:20:04.885660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.809 [2024-07-15 21:20:04.885668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.809 [2024-07-15 21:20:04.885888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.809 [2024-07-15 21:20:04.886106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.809 [2024-07-15 21:20:04.886114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.809 [2024-07-15 21:20:04.886121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:37.809 [2024-07-15 21:20:04.889668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.809 [2024-07-15 21:20:04.898871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.809 [2024-07-15 21:20:04.899451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.809 [2024-07-15 21:20:04.899466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.809 [2024-07-15 21:20:04.899474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.809 [2024-07-15 21:20:04.899693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.809 [2024-07-15 21:20:04.899912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.810 [2024-07-15 21:20:04.899920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.810 [2024-07-15 21:20:04.899927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.810 [2024-07-15 21:20:04.903479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.810 [2024-07-15 21:20:04.912685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.810 Malloc0 00:29:37.810 [2024-07-15 21:20:04.913371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.810 [2024-07-15 21:20:04.913408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.810 [2024-07-15 21:20:04.913421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.810 [2024-07-15 21:20:04.913664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.810 [2024-07-15 21:20:04.913887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.810 [2024-07-15 21:20:04.913897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.810 [2024-07-15 21:20:04.913906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.810 [2024-07-15 21:20:04.917467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.810 [2024-07-15 21:20:04.926681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.810 [2024-07-15 21:20:04.927444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.810 [2024-07-15 21:20:04.927482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.810 [2024-07-15 21:20:04.927493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.810 [2024-07-15 21:20:04.927732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.810 [2024-07-15 21:20:04.927956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.810 [2024-07-15 21:20:04.927964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.810 [2024-07-15 21:20:04.927972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.810 [2024-07-15 21:20:04.931533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.810 [2024-07-15 21:20:04.940542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.810 [2024-07-15 21:20:04.941238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.810 [2024-07-15 21:20:04.941275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21656e0 with addr=10.0.0.2, port=4420 00:29:37.810 [2024-07-15 21:20:04.941287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21656e0 is same with the state(5) to be set 00:29:37.810 [2024-07-15 21:20:04.941530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21656e0 (9): Bad file descriptor 00:29:37.810 [2024-07-15 21:20:04.941758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.810 [2024-07-15 21:20:04.941766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.810 [2024-07-15 21:20:04.941773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
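For readers reproducing this target setup outside the CI harness: the rpc_cmd calls traced above amount to the following sequence (a minimal sketch, assuming rpc_cmd wraps the stock scripts/rpc.py against an already-running nvmf_tgt; the NQN, Malloc parameters, address, and port are copied from the trace, not invented):

    # create the TCP transport with an 8 KiB IO unit size, as in host/bdevperf.sh@17
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # back the subsystem with a 64 MiB, 512-byte-block malloc bdev (bdevperf.sh@18)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # create the subsystem, allow any host, and expose the namespace (bdevperf.sh@19-20)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listen on the target-namespace address used throughout this log (bdevperf.sh@21)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420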
00:29:37.810 [2024-07-15 21:20:04.944857] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.810 [2024-07-15 21:20:04.945332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.810 21:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2157511 00:29:37.810 [2024-07-15 21:20:04.954548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.810 [2024-07-15 21:20:05.003964] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:47.811 00:29:47.811 Latency(us) 00:29:47.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.811 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:47.811 Verification LBA range: start 0x0 length 0x4000 00:29:47.811 Nvme1n1 : 15.01 8405.60 32.83 9697.25 0.00 7044.73 788.48 18131.63 00:29:47.811 =================================================================================================================== 00:29:47.811 Total : 8405.60 32.83 9697.25 0.00 7044.73 788.48 18131.63 00:29:47.811 21:20:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:47.811 21:20:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.811 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.811 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:47.811 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:47.812 rmmod nvme_tcp 00:29:47.812 rmmod nvme_fabrics 00:29:47.812 rmmod nvme_keyring 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2158832 ']' 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2158832 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2158832 ']' 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2158832 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2158832 00:29:47.812 21:20:13 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2158832' 00:29:47.812 killing process with pid 2158832 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2158832 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2158832 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.812 21:20:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.756 21:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:48.756 00:29:48.756 real 0m28.751s 00:29:48.756 user 1m3.410s 00:29:48.756 sys 0m7.788s 00:29:48.756 21:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:48.756 21:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:48.756 ************************************ 00:29:48.756 END TEST nvmf_bdevperf 00:29:48.756 ************************************ 00:29:48.756 21:20:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:48.756 21:20:15 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:48.756 21:20:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:48.756 21:20:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.756 21:20:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:48.756 ************************************ 00:29:48.756 START TEST nvmf_target_disconnect 00:29:48.756 ************************************ 00:29:48.756 21:20:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:48.756 * Looking for test storage... 
00:29:48.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:48.756 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.019 21:20:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:57.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:57.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.163 21:20:23 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:57.163 Found net devices under 0000:31:00.0: cvl_0_0 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:57.163 Found net devices under 0000:31:00.1: cvl_0_1 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:57.163 21:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.163 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.163 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.163 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:57.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:29:57.163 00:29:57.163 --- 10.0.0.2 ping statistics --- 00:29:57.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.164 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:29:57.164 00:29:57.164 --- 10.0.0.1 ping statistics --- 00:29:57.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.164 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:57.164 ************************************ 00:29:57.164 START TEST nvmf_target_disconnect_tc1 00:29:57.164 ************************************ 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:57.164 
21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:57.164 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.164 [2024-07-15 21:20:24.295450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-07-15 21:20:24.295523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfde650 with addr=10.0.0.2, port=4420 00:29:57.164 [2024-07-15 21:20:24.295554] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:57.164 [2024-07-15 21:20:24.295565] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:57.164 [2024-07-15 21:20:24.295573] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:57.164 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:57.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:57.164 Initializing NVMe Controllers 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:57.164 00:29:57.164 real 0m0.121s 00:29:57.164 user 0m0.042s 00:29:57.164 sys 0m0.079s 
00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:57.164 ************************************ 00:29:57.164 END TEST nvmf_target_disconnect_tc1 00:29:57.164 ************************************ 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:57.164 ************************************ 00:29:57.164 START TEST nvmf_target_disconnect_tc2 00:29:57.164 ************************************ 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2165248 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2165248 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2165248 ']' 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
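The nvmfappstart/waitforlisten trace above boils down to launching nvmf_tgt inside the target network namespace and polling for its RPC socket before any configuration is attempted. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and the workspace paths shown in the trace (the real common.sh helpers add retries, logging and error handling on top of this):

#!/usr/bin/env bash
# Minimal sketch of the disconnect_init target start-up traced above,
# not the actual common.sh helper implementation.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
RPC_SOCK=/var/tmp/spdk.sock    # assumed default application RPC socket

# Launch the NVMe-oF target inside the namespace created earlier (cvl_0_0_ns_spdk).
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# Wait until the target is up and listening on its UNIX-domain RPC socket.
for _ in $(seq 1 100); do
    [ -S "$RPC_SOCK" ] && break
    sleep 0.1
done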
00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.164 21:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.164 [2024-07-15 21:20:24.450265] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:29:57.164 [2024-07-15 21:20:24.450322] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.481 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.481 [2024-07-15 21:20:24.543854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.481 [2024-07-15 21:20:24.638678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.481 [2024-07-15 21:20:24.638734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.481 [2024-07-15 21:20:24.638742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.481 [2024-07-15 21:20:24.638750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.481 [2024-07-15 21:20:24.638756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.481 [2024-07-15 21:20:24.638939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:57.481 [2024-07-15 21:20:24.639096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:57.481 [2024-07-15 21:20:24.639227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:57.481 [2024-07-15 21:20:24.639228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:58.050 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.050 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:58.050 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.051 Malloc0 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:58.051 21:20:25 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.051 [2024-07-15 21:20:25.325366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.051 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.311 [2024-07-15 21:20:25.365736] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2165584 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:58.311 21:20:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:58.311 EAL: No free 2048 kB 
hugepages reported on node 1 00:30:00.225 21:20:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2165248 00:30:00.225 21:20:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Write completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Write completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Write completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Write completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Write completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Write completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Write completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Write completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.225 Read completed with error (sct=0, sc=8) 00:30:00.225 starting I/O failed 00:30:00.226 Read completed with error (sct=0, sc=8) 00:30:00.226 starting I/O failed 00:30:00.226 Read completed with error (sct=0, sc=8) 00:30:00.226 starting I/O failed 00:30:00.226 [2024-07-15 21:20:27.406955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.226 [2024-07-15 21:20:27.407524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.407565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to 
recover it. 00:30:00.226 [2024-07-15 21:20:27.407702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.407717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.408094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.408107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.408562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.408602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.408981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.408993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.409212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.409222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.409562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.409572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.409844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.409854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.410007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.410017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.410260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.410270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.410578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.410589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 
00:30:00.226 [2024-07-15 21:20:27.410784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.410795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.411137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.411147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.411530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.411541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.411794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.411804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.412170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.412180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.412443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.412453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.412833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.412843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.413171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.413181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.413400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.413410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.413766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.413776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 
00:30:00.226 [2024-07-15 21:20:27.414096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.414106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.414359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.414369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.414747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.414758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.414965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.414975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.415330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.415340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.415679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.415689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.415852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.415863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.416258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.416268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.416756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.416766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.416976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.416987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 
00:30:00.226 [2024-07-15 21:20:27.417301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.417311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.417673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.417683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.418050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.418059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.418425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.418435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.418813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.418823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.419030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.419041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.419393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.419404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.419767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.226 [2024-07-15 21:20:27.419777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.226 qpair failed and we were unable to recover it. 00:30:00.226 [2024-07-15 21:20:27.420144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.420155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.420493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.420503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 
00:30:00.227 [2024-07-15 21:20:27.420823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.420834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.421206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.421216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.421473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.421486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.421743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.421753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.422090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.422100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.422418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.422428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.422631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.422640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.422893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.422902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.423260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.423270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.423584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.423593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 
00:30:00.227 [2024-07-15 21:20:27.423945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.423955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.424272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.424282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.424671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.424680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.424963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.424972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.425316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.425325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.425619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.425630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.425962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.425972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.426207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.426216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.426483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.426493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.426777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.426786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 
00:30:00.227 [2024-07-15 21:20:27.427041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.427050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.427362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.427371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.427702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.427711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.428015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.428024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.428422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.428432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.428805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.428815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.429173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.429183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.429458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.429468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.429800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.429810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.430135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.430145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 
00:30:00.227 [2024-07-15 21:20:27.430469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.430480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.430809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.430819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.431119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.431129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.431459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.431469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.431828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.431838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.432189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.432199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.432506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.432516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.432876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.432886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.227 [2024-07-15 21:20:27.433228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.227 [2024-07-15 21:20:27.433242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.227 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.433575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.433585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 
00:30:00.228 [2024-07-15 21:20:27.433950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.433959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.434274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.434285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.434640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.434653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.434973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.434983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.435309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.435320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.435614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.435624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.435966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.435976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.436299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.436309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.436607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.436617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.436958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.436969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 
00:30:00.228 [2024-07-15 21:20:27.437322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.437332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.437667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.437677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.438035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.438045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.438382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.438392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.438731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.438741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.439060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.439070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.439432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.439443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.439734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.439744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.440087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.440097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.440508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.440519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 
00:30:00.228 [2024-07-15 21:20:27.440709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.440723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.441053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.441066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.441261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.441275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.441522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.441535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.441925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.441938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.442278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.442292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.442621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.442634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.443038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.443050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.443368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.443382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.443648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.443664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 
00:30:00.228 [2024-07-15 21:20:27.444025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.444037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.444273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.444286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.444649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.444662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.445020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.445032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.445283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.445296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.445623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.445636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.445844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.445857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.446184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.446196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.228 [2024-07-15 21:20:27.446575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.228 [2024-07-15 21:20:27.446588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.228 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.446959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.446972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 
00:30:00.229 [2024-07-15 21:20:27.447321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.447334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.447691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.447704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.447970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.447983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.448321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.448335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.448692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.448705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.449075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.449088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.449428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.449441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.449776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.449789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.450131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.450143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.450489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.450502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 
00:30:00.229 [2024-07-15 21:20:27.450821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.450833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.451195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.451207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.451534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.451548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.451869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.451881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.452236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.452250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.452576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.452588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.452946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.452959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.453354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.453368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.453679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.453692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.454016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.454029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 
00:30:00.229 [2024-07-15 21:20:27.454347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.454359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.454600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.454620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.454969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.454986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.455296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.455314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.455759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.455776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.456139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.456157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.456519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.456538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.456774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.456793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.457163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.457181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.457518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.457541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 
00:30:00.229 [2024-07-15 21:20:27.457900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.457918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.458263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.458282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.458634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.458652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.459017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.459035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.459377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.459395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.459752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.459771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.460155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.229 [2024-07-15 21:20:27.460173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.229 qpair failed and we were unable to recover it. 00:30:00.229 [2024-07-15 21:20:27.460510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.460528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.460871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.460888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.461306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.461325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 
00:30:00.230 [2024-07-15 21:20:27.461722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.461740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.462101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.462119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.462485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.462504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.462737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.462755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.463009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.463029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.463419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.463437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.463845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.463863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.464235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.464254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.464591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.464610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.464904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.464925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 
00:30:00.230 [2024-07-15 21:20:27.465304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.465329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.465662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.465686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.466069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.466092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.466454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.466480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.466823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.466847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.467296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.467321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.467685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.467709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.468057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.468082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.468434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.468459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.468826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.468850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 
00:30:00.230 [2024-07-15 21:20:27.469193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.469216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.469562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.469586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.469956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.469981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.470347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.470372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.470747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.470771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.471155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.471179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.471486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.471511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.471890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.471914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.472315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.472340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.472688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.472727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 
00:30:00.230 [2024-07-15 21:20:27.473081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.473105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.473483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.473507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.473888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.230 [2024-07-15 21:20:27.473912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.230 qpair failed and we were unable to recover it. 00:30:00.230 [2024-07-15 21:20:27.474295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.474320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.474714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.474738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.475123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.475147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.475381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.475406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.475759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.475784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.476105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.476128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.476510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.476538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 
00:30:00.231 [2024-07-15 21:20:27.476858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.476885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.477285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.477313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.477675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.477702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.477991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.478018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.478301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.478332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.478639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.478666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.479055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.479081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.479533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.479562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.479915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.479942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.480327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.480354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 
00:30:00.231 [2024-07-15 21:20:27.480749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.480775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.481106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.481134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.481380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.481410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.481761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.481788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.482161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.482187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.482588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.482617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.482986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.483013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.483386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.483415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.483775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.483802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.484185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.484212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 
00:30:00.231 [2024-07-15 21:20:27.484579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.484607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.484990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.485016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.485422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.485449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.485899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.485926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.486313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.486340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.486609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.486645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.487023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.487050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.487457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.487485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.487760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.487786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.488147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.488181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 
00:30:00.231 [2024-07-15 21:20:27.488557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.488585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.488993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.489020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.489400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.489427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.489817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.231 [2024-07-15 21:20:27.489845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.231 qpair failed and we were unable to recover it. 00:30:00.231 [2024-07-15 21:20:27.490245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.490273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.490668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.490694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.491081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.491108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.491544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.491573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.491941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.491968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.492346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.492374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 
00:30:00.232 [2024-07-15 21:20:27.492752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.492779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.492953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.492979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.493378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.493405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.493800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.493827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.494158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.494185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.494573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.494601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.494976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.495002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.495287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.495318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.495711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.495739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.496131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.496158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 
00:30:00.232 [2024-07-15 21:20:27.496422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.496449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.496841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.496867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.497242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.497270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.497649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.497676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.497914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.497943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.498334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.498362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.498754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.498781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.499181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.499207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.499550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.499577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.500018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.500045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 
00:30:00.232 [2024-07-15 21:20:27.500410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.500438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.500818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.500845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.501218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.501256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.501677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.501704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.502071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.502097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.502569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.502596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.502960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.502986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.503348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.503376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.503758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.503785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 00:30:00.232 [2024-07-15 21:20:27.504133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.232 [2024-07-15 21:20:27.504166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.232 qpair failed and we were unable to recover it. 
00:30:00.508 [2024-07-15 21:20:27.579893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.579920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.580285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.580313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.580707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.580734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.581119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.581146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.581511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.581538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.581893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.581920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.582288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.582316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.582656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.582684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.583087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.583114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.583355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.583391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 
00:30:00.508 [2024-07-15 21:20:27.583758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.583785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.584222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.584258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.584656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.584682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.584953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.584979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.585371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.585398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.585786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.585813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.586260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.586288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.586694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.586721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.587184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.587210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.587607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.587636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 
00:30:00.508 [2024-07-15 21:20:27.587899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.587926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.588299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.588327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.588654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.588682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.589094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.589121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.589485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.589512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.589899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.589926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.590330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.590357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.590783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.590810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.591173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.591200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.591564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.591592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 
00:30:00.508 [2024-07-15 21:20:27.591843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.591868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.592251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.592280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.592671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.592698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.593071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.593097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.593483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.593511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.593678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.593708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.594114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.594143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.594419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.508 [2024-07-15 21:20:27.594446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.508 qpair failed and we were unable to recover it. 00:30:00.508 [2024-07-15 21:20:27.594834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.594861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.595284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.595311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 
00:30:00.509 [2024-07-15 21:20:27.595703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.595729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.596087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.596114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.596386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.596414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.596798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.596824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.597204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.597239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.597621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.597647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.598032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.598059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.598428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.598456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.598813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.598839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.599216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.599268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 
00:30:00.509 [2024-07-15 21:20:27.599517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.599545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.599939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.599965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.600339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.600369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.600668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.600694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.601068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.601095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.601474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.601502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.601876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.601903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.602300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.602328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.602548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.602575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.602957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.602983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 
00:30:00.509 [2024-07-15 21:20:27.603356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.603384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.603781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.603809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.604194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.604221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.604617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.604646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.605006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.605033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.605426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.605454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.605826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.605853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.606252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.606280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.606660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.606687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.606926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.606953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 
00:30:00.509 [2024-07-15 21:20:27.607331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.607359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.607747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.607774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.608039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.608066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.608443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.608471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.608888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.608915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.609166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.609195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.609600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.609629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.609855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.609885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.610269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.509 [2024-07-15 21:20:27.610297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.509 qpair failed and we were unable to recover it. 00:30:00.509 [2024-07-15 21:20:27.610691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.610718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 
00:30:00.510 [2024-07-15 21:20:27.610975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.611002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.611365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.611393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.611776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.611803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.612196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.612223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.612604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.612631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.613060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.613087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.613525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.613553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.613866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.613892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.614132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.614161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.614518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.614552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 
00:30:00.510 [2024-07-15 21:20:27.614764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.614791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.615180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.615207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.615588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.615616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.616039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.616067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.616446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.616473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.616848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.616875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.617212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.617247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.617600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.617627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.618005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.618031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.618450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.618478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 
00:30:00.510 [2024-07-15 21:20:27.618854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.618881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.619285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.619313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.619704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.619731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.620077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.620105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.620534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.620562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.620797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.620823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.621192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.621219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.621622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.621649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.622028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.622055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.622470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.622498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 
00:30:00.510 [2024-07-15 21:20:27.622867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.622894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.623283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.623312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.623666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.623692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.624079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.624106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.624470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.624497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.624848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.624875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.625195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.625222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.625607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.625634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.625964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.625991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 00:30:00.510 [2024-07-15 21:20:27.626378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.510 [2024-07-15 21:20:27.626406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.510 qpair failed and we were unable to recover it. 
00:30:00.510 [2024-07-15 21:20:27.626787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.626814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.627259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.627287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.627667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.627695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.628062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.628088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.628361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.628388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.628795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.628822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.629203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.629237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.629606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.629633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.630017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.630044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.630430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.630464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 
00:30:00.511 [2024-07-15 21:20:27.630830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.630857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.631227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.631268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.631632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.631658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.632060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.632088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.632484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.632512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.632774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.632801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.633170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.633196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.633532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.633560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.633923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.633949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.634315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.634343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 
00:30:00.511 [2024-07-15 21:20:27.634740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.634767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.635173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.635199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.635562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.635590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.635976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.636003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.636386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.636413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.636843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.636869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.637253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.637281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.637768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.637794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.637964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.637993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.638327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.638356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 
00:30:00.511 [2024-07-15 21:20:27.638738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.638765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.639060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.639089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.639449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.639478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.639909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.639937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.640315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.640343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.640733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.640760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.640965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.640991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.641354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.641382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.641654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.641680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 00:30:00.511 [2024-07-15 21:20:27.642068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.642095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.511 qpair failed and we were unable to recover it. 
00:30:00.511 [2024-07-15 21:20:27.642382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.511 [2024-07-15 21:20:27.642410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.642820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.642847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.643236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.643264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.643515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.643543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.643803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.643832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.644254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.644281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.644697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.644724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.645069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.645096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.645459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.645486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.645847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.645873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 
00:30:00.512 [2024-07-15 21:20:27.646322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.646350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.646722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.646749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.647142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.647168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.647541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.647570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.647970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.647996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.648395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.648423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.648799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.648827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.649103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.649130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.649555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.649583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.649867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.649897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 
00:30:00.512 [2024-07-15 21:20:27.650285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.650312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.650700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.650727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.651089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.651116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.651495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.651522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.651895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.651922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.652337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.652364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.652738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.652765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.653174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.653203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.653631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.653659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.654054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.654080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 
00:30:00.512 [2024-07-15 21:20:27.654486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.654515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.654784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.654813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.655185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.655213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.655613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.655642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.656029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.656055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.512 [2024-07-15 21:20:27.656410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.512 [2024-07-15 21:20:27.656439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.512 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.656892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.656929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.657260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.657287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.657648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.657675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.657960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.657986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 
00:30:00.513 [2024-07-15 21:20:27.658382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.658410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.658684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.658710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.659086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.659113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.659397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.659427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.659775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.659802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.660060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.660086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.660457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.660484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.660850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.660876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.661257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.661284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.661596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.661624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 
00:30:00.513 [2024-07-15 21:20:27.662010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.662037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.662466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.662494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.662858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.662885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.663304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.663332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.663708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.663735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.664146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.664173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.664625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.664653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.664927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.664953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.665325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.665352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.665732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.665759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 
00:30:00.513 [2024-07-15 21:20:27.666139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.666166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.666436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.666463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.666790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.666818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.667182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.667208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.667637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.667665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.668053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.668080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.668473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.668500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.668922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.668949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.669328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.669355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.669837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.669863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 
00:30:00.513 [2024-07-15 21:20:27.670228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.670265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.670629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.670656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.671039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.671067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.671417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.671445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.671910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.671936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.672223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.672260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.513 [2024-07-15 21:20:27.672436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.513 [2024-07-15 21:20:27.672468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.513 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.672826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.672852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.673227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.673263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.673640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.673667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 
00:30:00.514 [2024-07-15 21:20:27.674050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.674076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.674465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.674493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.674776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.674805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.675220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.675269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.675627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.675654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.676034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.676060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.676439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.676468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.676852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.676880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.677263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.677290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.677685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.677712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 
00:30:00.514 [2024-07-15 21:20:27.678085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.678112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.678488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.678515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.678767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.678793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.679176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.679203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.679658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.679686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.680058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.680085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.680451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.680479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.680870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.680897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.681275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.681302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.681660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.681686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 
00:30:00.514 [2024-07-15 21:20:27.682112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.682139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.682411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.682439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.682831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.682858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.683247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.683275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.683725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.683753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.684020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.684046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.684480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.684509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.684842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.684870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.685253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.685281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.685569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.685600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 
00:30:00.514 [2024-07-15 21:20:27.685976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.686003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.686382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.686409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.686793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.686819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.687202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.687236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.687614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.687641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.687905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.687935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.688315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.688350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.514 [2024-07-15 21:20:27.688703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.514 [2024-07-15 21:20:27.688730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.514 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.689122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.689148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.689518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.689545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 
00:30:00.515 [2024-07-15 21:20:27.689910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.689936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.690324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.690351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.690739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.690766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.691144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.691171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.691564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.691592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.691960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.691987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.692362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.692391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.692778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.692805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.693183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.693209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.693595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.693623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 
00:30:00.515 [2024-07-15 21:20:27.694011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.694038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.694322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.694349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.694715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.694742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.695198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.695225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.695577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.695605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.695769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.695794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 00:30:00.515 [2024-07-15 21:20:27.696187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.696213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9d4000b90 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 
00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Write completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 Read completed with error (sct=0, sc=8) 00:30:00.515 starting I/O failed 00:30:00.515 [2024-07-15 21:20:27.696526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.515 [2024-07-15 21:20:27.696913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.515 [2024-07-15 21:20:27.696930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:00.515 qpair failed and we were unable to recover it. 
00:30:00.515 [2024-07-15 21:20:27.697177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.515 [2024-07-15 21:20:27.697187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:00.515 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 21:20:27.697 through 21:20:28.132; only the timestamps change ...]
00:30:01.105 [2024-07-15 21:20:28.132421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.105 [2024-07-15 21:20:28.132431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:01.105 qpair failed and we were unable to recover it.
00:30:01.105 [2024-07-15 21:20:28.132760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.132771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.133111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.133122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.133481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.133493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.133855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.133866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.134200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.134211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.134556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.134567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.134905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.134915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.135242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.135253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.135614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.135624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.135966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.135976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 
00:30:01.105 [2024-07-15 21:20:28.136359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.136370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.136746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.136757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.137099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.137110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.137439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.105 [2024-07-15 21:20:28.137450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.105 qpair failed and we were unable to recover it. 00:30:01.105 [2024-07-15 21:20:28.137787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.137798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.138163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.138174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.138518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.138530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.138871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.138881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.139108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.139119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.139430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.139442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 
00:30:01.106 [2024-07-15 21:20:28.139777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.139788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.140000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.140012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.140375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.140385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.140720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.140731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.141084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.141095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.141436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.141447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.141787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.141798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.142154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.142165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.142512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.142524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.142948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.142959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 
00:30:01.106 [2024-07-15 21:20:28.143294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.143306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.143636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.143646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.144025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.144038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.144379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.144389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.144727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.144738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.145102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.145113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.145462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.145473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.145834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.145845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.146183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.146193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.146596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.146607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 
00:30:01.106 [2024-07-15 21:20:28.146945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.146956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.147298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.147309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.147640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.147650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.148012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.148023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.148385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.148396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.148743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.148753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.149096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.149106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.149421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.149435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.149788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.149800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.150140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.150151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 
00:30:01.106 [2024-07-15 21:20:28.150482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.150492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.150853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.150863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.151262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.151273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.151616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.151627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.151987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.151997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.106 qpair failed and we were unable to recover it. 00:30:01.106 [2024-07-15 21:20:28.152360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.106 [2024-07-15 21:20:28.152371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.152561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.152572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.152930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.152940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.153318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.153329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.153706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.153719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 
00:30:01.107 [2024-07-15 21:20:28.154061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.154071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.154246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.154257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.154613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.154624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.154931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.154941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.155303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.155314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.155652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.155663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.155999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.156010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.156366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.156376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.156723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.156734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.157074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.157086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 
00:30:01.107 [2024-07-15 21:20:28.157428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.157438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.157798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.157809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.158150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.158161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.158362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.158374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.158737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.158748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.159107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.159117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.159484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.159494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.159678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.159688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.160003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.160014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.160368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.160379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 
00:30:01.107 [2024-07-15 21:20:28.160715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.160726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.161065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.161076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.161416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.161426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.161745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.161757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.161948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.161960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.162267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.162279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.162622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.162632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.162997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.163008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.163354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.163365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.163688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.163699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 
00:30:01.107 [2024-07-15 21:20:28.163900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.163911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.164322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.164333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.164697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.164708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.165049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.165059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.165401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.107 [2024-07-15 21:20:28.165411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.107 qpair failed and we were unable to recover it. 00:30:01.107 [2024-07-15 21:20:28.165629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.165639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.165980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.165990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.166330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.166341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.166683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.166694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.167055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.167066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 
00:30:01.108 [2024-07-15 21:20:28.167405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.167417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.167759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.167770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.168107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.168117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.168458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.168469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.168805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.168815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.169155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.169166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.169519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.169531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.169892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.169903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.170248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.170259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.170622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.170633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 
00:30:01.108 [2024-07-15 21:20:28.171036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.171046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.171372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.171383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.171705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.171716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.172058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.172068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.172344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.172354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.172727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.172737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.173152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.173164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.173455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.173465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.173816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.173827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.174195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.174207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 
00:30:01.108 [2024-07-15 21:20:28.174544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.174555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.174802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.174812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.175150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.175161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.175541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.175553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.175900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.175911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.176254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.176265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.176588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.176598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.176859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.176872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.177228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.177247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 00:30:01.108 [2024-07-15 21:20:28.177536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.108 [2024-07-15 21:20:28.177547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.108 qpair failed and we were unable to recover it. 
00:30:01.108 [2024-07-15 21:20:28.177843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.108 [2024-07-15 21:20:28.177853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:01.108 qpair failed and we were unable to recover it.
[... identical records repeat continuously between 21:20:28.178 and 21:20:28.252: posix_sock_create connect() fails with errno = 111 and nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x240fa50, addr=10.0.0.2, port=4420; each qpair failed and could not be recovered ...]
00:30:01.114 [2024-07-15 21:20:28.252862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.114 [2024-07-15 21:20:28.252874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:01.114 qpair failed and we were unable to recover it.
00:30:01.114 [2024-07-15 21:20:28.253226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.253247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.253558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.253569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.253740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.253750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.254083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.254093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.254418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.254432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.254782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.254793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.255084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.255095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.255340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.255350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.255763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.255773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.256073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.256084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 
00:30:01.114 [2024-07-15 21:20:28.256402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.256413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.256745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.256755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.256995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.257005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.257341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.257351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.257705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.257716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.258081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.258091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.258391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.258403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.258752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.258763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.259001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.259010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.259376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.259388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 
00:30:01.114 [2024-07-15 21:20:28.259747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.259758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.260104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.260115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.260474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.260485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.260786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.260797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.261150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.261161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.261518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.261529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.261752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.261762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.114 [2024-07-15 21:20:28.262000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.114 [2024-07-15 21:20:28.262012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.114 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.262375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.262386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.262724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.262735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 
00:30:01.115 [2024-07-15 21:20:28.263078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.263088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.263430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.263447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.263819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.263829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.264213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.264223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.264616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.264627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.264995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.265005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.265292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.265303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.265656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.265666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.265900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.265910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.266246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.266257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 
00:30:01.115 [2024-07-15 21:20:28.266368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.266378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.266717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.266727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.266967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.266977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.267343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.267353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.267721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.267732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.268072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.268082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.268472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.268483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.268846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.268857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.269151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.269162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.269511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.269521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 
00:30:01.115 [2024-07-15 21:20:28.269865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.269875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.270242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.270253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.270572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.270582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.270917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.270927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.271267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.271278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.271612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.271622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.271963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.271973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.272171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.272182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.272523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.272535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.272865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.272876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 
00:30:01.115 [2024-07-15 21:20:28.273280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.273290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.273649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.273660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.273998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.274009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.274373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.274384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.274723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.274733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.274894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.274904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.275254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.275264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.275516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.115 [2024-07-15 21:20:28.275526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.115 qpair failed and we were unable to recover it. 00:30:01.115 [2024-07-15 21:20:28.275742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.275751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.276077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.276088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 
00:30:01.116 [2024-07-15 21:20:28.276430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.276441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.276831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.276842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.277236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.277247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.277480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.277490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.277733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.277743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.278126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.278136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.278482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.278494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.278833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.278844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.279184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.279196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.279532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.279542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 
00:30:01.116 [2024-07-15 21:20:28.279885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.279896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.280231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.280243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.280602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.280612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.280733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.280743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.281123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.281133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.281496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.281507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.281851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.281861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.282226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.282239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.282514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.282524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.282900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.282912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 
00:30:01.116 [2024-07-15 21:20:28.283325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.283336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.283700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.283711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.283916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.283927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.284272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.284283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.284625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.284635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.284984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.284994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.285332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.285343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.285698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.285709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.286038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.286048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.286383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.286394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 
00:30:01.116 [2024-07-15 21:20:28.286747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.286758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.287103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.287113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.287348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.287359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.287680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.287690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.288036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.288047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.288390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.288400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.288621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.288631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.288963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.288973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.289317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.289328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.116 qpair failed and we were unable to recover it. 00:30:01.116 [2024-07-15 21:20:28.289683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.116 [2024-07-15 21:20:28.289694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 
00:30:01.117 [2024-07-15 21:20:28.290028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.290039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.290391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.290402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.290746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.290756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.291097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.291108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.291442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.291453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.291704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.291714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.292061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.292071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.292372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.292384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.292605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.292615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.293002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.293012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 
00:30:01.117 [2024-07-15 21:20:28.293306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.293317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.293672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.293682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.294020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.294031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.294381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.294392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.294738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.294749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.295093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.295103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.295439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.295451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.295684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.295694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.296033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.296044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.296376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.296386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 
00:30:01.117 [2024-07-15 21:20:28.296668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.296679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.296991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.297001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.297286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.297297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.297655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.297665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.298003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.298014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.298376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.298387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.298744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.298754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.299096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.299106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.299448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.299460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.299713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.299723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 
00:30:01.117 [2024-07-15 21:20:28.300095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.300105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.300363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.300374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.300752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.300763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.301133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.301144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.117 qpair failed and we were unable to recover it. 00:30:01.117 [2024-07-15 21:20:28.301516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.117 [2024-07-15 21:20:28.301527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.301860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.301870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.302067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.302078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.302425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.302436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.302784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.302795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.303143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.303154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 
00:30:01.118 [2024-07-15 21:20:28.303509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.303519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.303838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.303849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.304045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.304056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.304319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.304331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.304667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.304677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.305046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.305057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.305386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.305397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.305774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.305784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.306139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.306149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.306500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.306510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 
00:30:01.118 [2024-07-15 21:20:28.306830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.306841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.307200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.307210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.307552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.307563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.307890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.307901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.308240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.308251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.308509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.308518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.308863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.308873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.309120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.309133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.309393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.309404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.309635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.309645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 
00:30:01.118 [2024-07-15 21:20:28.309954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.309965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.310331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.310342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.310663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.310674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.311025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.311036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.311371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.311382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.311712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.311722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.312149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.312159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.312502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.312513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.312938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.312948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.313089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.313099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 
00:30:01.118 [2024-07-15 21:20:28.313249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.313259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.313611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.313621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.313900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.313910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.314270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.314281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.314600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.118 [2024-07-15 21:20:28.314610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.118 qpair failed and we were unable to recover it. 00:30:01.118 [2024-07-15 21:20:28.314824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.314835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.315197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.315207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.315587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.315600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.315885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.315896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.316234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.316245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 
00:30:01.119 [2024-07-15 21:20:28.316494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.316504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.316818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.316828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.317069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.317079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.317421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.317432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.317771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.317782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.318135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.318145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.318492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.318503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.318846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.318857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.319203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.319213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.319564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.319575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 
00:30:01.119 [2024-07-15 21:20:28.319916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.319926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.320237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.320248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.320489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.320499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.320857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.320868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.321194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.321205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.321545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.321556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.321793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.321803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.322125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.322136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.322468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.322479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.322727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.322737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 
00:30:01.119 [2024-07-15 21:20:28.322978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.322989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.323310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.323320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.323556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.323566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.323982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.323994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.324318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.324329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.324696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.324706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.325052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.325062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.325406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.325417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.325770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.325780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.326148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.326158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 
00:30:01.119 [2024-07-15 21:20:28.326493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.326504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.326829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.326842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.327184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.327194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.327533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.327545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.327904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.327914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.328245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.328255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.119 [2024-07-15 21:20:28.328570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.119 [2024-07-15 21:20:28.328581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.119 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.328943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.328953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.329294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.329305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.329502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.329513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 
00:30:01.120 [2024-07-15 21:20:28.329816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.329826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.330146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.330156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.330528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.330539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.330817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.330829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.331200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.331210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.331555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.331566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.331834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.331845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.332193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.332204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.332547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.332558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.332894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.332905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 
00:30:01.120 [2024-07-15 21:20:28.333244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.333255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.333602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.333612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.333931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.333942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.334261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.334272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.334475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.334486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.334694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.334704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.335041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.335052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.335394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.335405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.335570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.335582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Read completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 Write completed with error (sct=0, sc=8)
00:30:01.120 starting I/O failed
00:30:01.120 [2024-07-15 21:20:28.336318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.120 [2024-07-15 21:20:28.336721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.120 [2024-07-15 21:20:28.336763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9e4000b90 with addr=10.0.0.2, port=4420
00:30:01.120 qpair failed and we were unable to recover it.
00:30:01.120 [2024-07-15 21:20:28.337140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.337169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9e4000b90 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.337581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.337669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9e4000b90 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.337895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.337909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.338268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.338279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.338622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.338632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.120 [2024-07-15 21:20:28.338999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.120 [2024-07-15 21:20:28.339010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.120 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.339336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.339348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.339696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.339706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.339941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.339951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.340247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.340258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 
00:30:01.121 [2024-07-15 21:20:28.340594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.340604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.340885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.340897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.341133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.341144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.341485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.341495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.341815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.341826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.342168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.342179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.342395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.342406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.342769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.342779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.343030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.343042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.343276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.343286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 
00:30:01.121 [2024-07-15 21:20:28.343598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.343609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.343979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.343990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.344334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.344345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.344618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.344629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.344864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.344874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.345222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.345236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.345591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.345602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.345907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.345918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.346251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.346263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.346596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.346606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 
00:30:01.121 [2024-07-15 21:20:28.346971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.346982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.347330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.347341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.347694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.347705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.347954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.347964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.348330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.348340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.348693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.348704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.349042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.349053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.349316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.349326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.349654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.349664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 00:30:01.121 [2024-07-15 21:20:28.349945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.121 [2024-07-15 21:20:28.349956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.121 qpair failed and we were unable to recover it. 
00:30:01.121 [2024-07-15 21:20:28.350301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.121 [2024-07-15 21:20:28.350311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:01.121 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats essentially verbatim about 210 times between 21:20:28.350 and 21:20:28.423; only the timestamps change ...]
00:30:01.399 [2024-07-15 21:20:28.423084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.399 [2024-07-15 21:20:28.423096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:01.399 qpair failed and we were unable to recover it.
00:30:01.399 [2024-07-15 21:20:28.423464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.399 [2024-07-15 21:20:28.423476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.399 qpair failed and we were unable to recover it. 00:30:01.399 [2024-07-15 21:20:28.423818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.399 [2024-07-15 21:20:28.423829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.399 qpair failed and we were unable to recover it. 00:30:01.399 [2024-07-15 21:20:28.424173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.399 [2024-07-15 21:20:28.424187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.399 qpair failed and we were unable to recover it. 00:30:01.399 [2024-07-15 21:20:28.424553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.399 [2024-07-15 21:20:28.424564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.399 qpair failed and we were unable to recover it. 00:30:01.399 [2024-07-15 21:20:28.424928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.424941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.425283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.425294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.425639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.425649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.426024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.426037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.426388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.426400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.426725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.426736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 
00:30:01.400 [2024-07-15 21:20:28.427077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.427089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.427335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.427345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.427707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.427718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.428065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.428076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.428415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.428427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.428793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.428805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.429169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.429180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.429526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.429537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.429876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.429887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.430237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.430248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 
00:30:01.400 [2024-07-15 21:20:28.430611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.430622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.430962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.430975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.431336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.431348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.431713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.431725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.432092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.432103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.432454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.432465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.432790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.432802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.433158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.433168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.433516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.433528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.433887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.433898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 
00:30:01.400 [2024-07-15 21:20:28.434243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.434256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.434496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.434507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.434869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.434880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.435219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.435235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.435601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.435613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.435949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.435960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.436320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.436331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.436687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.436699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.437038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.437049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.437414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.437425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 
00:30:01.400 [2024-07-15 21:20:28.437797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.437809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.438141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.438152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.438517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.438528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.438893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.438904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.439244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.439256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.400 qpair failed and we were unable to recover it. 00:30:01.400 [2024-07-15 21:20:28.439605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.400 [2024-07-15 21:20:28.439616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 00:30:01.401 [2024-07-15 21:20:28.439951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.401 [2024-07-15 21:20:28.439963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 00:30:01.401 [2024-07-15 21:20:28.440322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.401 [2024-07-15 21:20:28.440333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 00:30:01.401 [2024-07-15 21:20:28.440684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.401 [2024-07-15 21:20:28.440696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 00:30:01.401 [2024-07-15 21:20:28.441037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.401 [2024-07-15 21:20:28.441048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 
00:30:01.401 [2024-07-15 21:20:28.441389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.401 [2024-07-15 21:20:28.441401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 00:30:01.401 [2024-07-15 21:20:28.441764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.401 [2024-07-15 21:20:28.441775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 00:30:01.401 [2024-07-15 21:20:28.442102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.401 [2024-07-15 21:20:28.442116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 00:30:01.401 [2024-07-15 21:20:28.442507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.401 [2024-07-15 21:20:28.442522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.401 qpair failed and we were unable to recover it. 00:30:01.401 [2024-07-15 21:20:28.442883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.442894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.443249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.443261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.443625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.443636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.444012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.444023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.444374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.444393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.444704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.444715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 
00:30:01.402 [2024-07-15 21:20:28.445562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.445584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.445930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.445942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.446738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.446758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.446987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.446998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.447368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.447379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.447741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.447751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.448092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.448102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.448439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.448450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.448782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.448792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.449112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.449123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 
00:30:01.402 [2024-07-15 21:20:28.449529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.449541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.449848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.449859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.450153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.450164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.450517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.450527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.450867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.450878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.451251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.451262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.451608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.451618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.451961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.451972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.452310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.452321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.452655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.452666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 
00:30:01.402 [2024-07-15 21:20:28.452994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.453007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.453348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.453359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.454178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.402 [2024-07-15 21:20:28.454198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.402 qpair failed and we were unable to recover it. 00:30:01.402 [2024-07-15 21:20:28.454574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.454586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.455518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.455541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.455963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.455974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.456743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.456764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.457103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.457115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.457450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.457462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.457802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.457813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 
00:30:01.403 [2024-07-15 21:20:28.458011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.458022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.458356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.458367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.458710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.458721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.459636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.459657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.460006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.460018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.460972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.460996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.461191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.461204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.461564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.461576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.461917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.461928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.462141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.462152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 
00:30:01.403 [2024-07-15 21:20:28.462495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.462506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.462848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.462859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.463232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.463245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.463576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.463587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.463916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.463927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.464131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.464142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.464561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.464572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.464928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.464940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.465755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.465777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.466021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.466032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 
00:30:01.403 [2024-07-15 21:20:28.466373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.466384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.466706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.466717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.467046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.467057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.467398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.467409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.467743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.467754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.468116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.468127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.468334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.468346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.468695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.468706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.468950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.468960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.469329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.469339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 
00:30:01.403 [2024-07-15 21:20:28.469690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.469700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.470062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.470074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.470480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.470491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.471213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.471239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.403 qpair failed and we were unable to recover it. 00:30:01.403 [2024-07-15 21:20:28.471600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.403 [2024-07-15 21:20:28.471613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.404 qpair failed and we were unable to recover it. 00:30:01.404 [2024-07-15 21:20:28.472294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.404 [2024-07-15 21:20:28.472313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.404 qpair failed and we were unable to recover it. 00:30:01.404 [2024-07-15 21:20:28.473018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.404 [2024-07-15 21:20:28.473040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.404 qpair failed and we were unable to recover it. 00:30:01.404 [2024-07-15 21:20:28.473403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.404 [2024-07-15 21:20:28.473416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.404 qpair failed and we were unable to recover it. 00:30:01.404 [2024-07-15 21:20:28.473778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.404 [2024-07-15 21:20:28.473790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.404 qpair failed and we were unable to recover it. 00:30:01.404 [2024-07-15 21:20:28.474130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.404 [2024-07-15 21:20:28.474141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.404 qpair failed and we were unable to recover it. 
00:30:01.404 [2024-07-15 21:20:28.474496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.404 [2024-07-15 21:20:28.474508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:01.404 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 21:20:28.474 through 21:20:28.547 ...]
00:30:01.409 [2024-07-15 21:20:28.547853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.409 [2024-07-15 21:20:28.547864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-15 21:20:28.548122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.548132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.548545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.548556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.548839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.548851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.549198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.549208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.549588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.549599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.549945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.549956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.550300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.550311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.550620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.550630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.551000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.551011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.551352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.551363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 
00:30:01.409 [2024-07-15 21:20:28.551703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.551714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.551999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.552009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.552360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.552371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.552747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.552758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.553044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.553055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.553423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.553434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.553813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.553823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.554189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.554200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.554543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.554553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.554879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.554891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 
00:30:01.409 [2024-07-15 21:20:28.555240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.555250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.555560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.555570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.555906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.555918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.556303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.556314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.556673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.556683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.409 [2024-07-15 21:20:28.557003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.409 [2024-07-15 21:20:28.557016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.409 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.557296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.557307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.557655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.557665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.558000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.558010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.558346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.558358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 
00:30:01.410 [2024-07-15 21:20:28.558642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.558652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.558967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.558979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.559223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.559246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.559553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.559564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.559860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.559871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.560111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.560122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.560356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.560366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.560713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.560723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.561019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.561030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.561379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.561390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 
00:30:01.410 [2024-07-15 21:20:28.561725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.561736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.561961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.561972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.562260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.562271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.562489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.562500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.562842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.562853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.563160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.563171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.563573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.563584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.563952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.563962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.564204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.564215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.564444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.564456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 
00:30:01.410 [2024-07-15 21:20:28.564783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.564795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.565153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.565164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.565561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.565574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.565791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.565802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.566068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.566079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.566342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.566353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.566571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.566582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.566994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.567004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.567343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.567354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.567594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.567605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 
00:30:01.410 [2024-07-15 21:20:28.567835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.567845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.410 [2024-07-15 21:20:28.568206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.410 [2024-07-15 21:20:28.568217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.410 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.568586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.568597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.568978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.568988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.569256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.569266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.569515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.569526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.569886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.569898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.570273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.570284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.570664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.570674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.571013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.571023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 
00:30:01.411 [2024-07-15 21:20:28.571333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.571345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.571676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.571686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.572007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.572017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.572380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.572391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.572736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.572748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.573055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.573066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.573348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.573359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.573713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.573724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.574068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.574079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.574414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.574425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 
00:30:01.411 [2024-07-15 21:20:28.574785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.574796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.574995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.575007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.575313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.575324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.575657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.575668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.576002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.576014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.576318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.576329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.576696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.576706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.577067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.577077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.577260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.577271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.577625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.577637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 
00:30:01.411 [2024-07-15 21:20:28.577981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.577992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.578336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.578347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.578716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.578727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.579069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.579080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.579425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.579436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.579800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.579811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.580157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.580168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.580422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.580432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 00:30:01.411 [2024-07-15 21:20:28.580531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.411 [2024-07-15 21:20:28.580541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:01.411 qpair failed and we were unable to recover it. 
00:30:01.411 [2024-07-15 21:20:28.580729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d750 is same with the state(5) to be set 00:30:01.411 Read completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Read completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Read completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Read completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Read completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Read completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Write completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Read completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Write completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Write completed with error (sct=0, sc=8) 00:30:01.411 starting I/O failed 00:30:01.411 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Read completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 Write completed with error (sct=0, sc=8) 00:30:01.412 starting I/O failed 00:30:01.412 [2024-07-15 21:20:28.581115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.412 [2024-07-15 21:20:28.581606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.581638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 
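The "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" burst above is the host completing every outstanding command with an abort status once the qpair's socket is gone: status code type 0 is the generic command status set, and status code 8 (0x08) most likely corresponds to "Command Aborted due to SQ Deletion", the status used when queued requests are aborted on a failed queue pair, while the subsequent "CQ transport error -6 (No such device or address)" is -ENXIO reported for that dead qpair by spdk_nvme_qpair_process_completions(). The sketch below only illustrates how the 15-bit NVMe completion Status Field (phase bit excluded) splits into SC/SCT/CRD/More/DNR per the NVMe base specification; the sample value is hypothetical.

/* Illustration only: split an NVMe completion Status Field (CQE Dword 3
 * bits 31:17, phase bit excluded) into its subfields. The sample value
 * encodes SCT=0 (generic) and SC=0x08, matching the sct=0, sc=8 pairs
 * logged above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t status = 0x0008;             /* hypothetical 15-bit status field: SC=0x08, SCT=0 */

	unsigned sc  = status & 0xff;         /* bits 7:0   - Status Code */
	unsigned sct = (status >> 8) & 0x7;   /* bits 10:8  - Status Code Type */
	unsigned crd = (status >> 11) & 0x3;  /* bits 12:11 - Command Retry Delay */
	unsigned m   = (status >> 13) & 0x1;  /* bit 13     - More */
	unsigned dnr = (status >> 14) & 0x1;  /* bit 14     - Do Not Retry */

	printf("sct=%u sc=0x%02x crd=%u more=%u dnr=%u\n", sct, sc, crd, m, dnr);
	return 0;
}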
00:30:01.412 [2024-07-15 21:20:28.582070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.582079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.582485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.582514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.582860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.582869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.583015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.583024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.583266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.583277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.583662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.583670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.583834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.583842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.584213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.584221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.584577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.584585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.584880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.584889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 
00:30:01.412 [2024-07-15 21:20:28.585105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.585113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.585388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.585397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.585719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.585726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.585893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.585900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.586042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.586049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.586347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.586355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.586689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.586697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.586995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.587003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.587340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.587348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 00:30:01.412 [2024-07-15 21:20:28.587727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.412 [2024-07-15 21:20:28.587735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.412 qpair failed and we were unable to recover it. 
00:30:01.412 [2024-07-15 21:20:28.588068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.412 [2024-07-15 21:20:28.588076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:01.412 qpair failed and we were unable to recover it.
00:30:01.412 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 21:20:28.588 through 21:20:28.654 ...]
00:30:01.418 [2024-07-15 21:20:28.654683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.418 [2024-07-15 21:20:28.654691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:01.418 qpair failed and we were unable to recover it.
00:30:01.418 [2024-07-15 21:20:28.655029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.655037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.655506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.655514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.655833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.655841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.656213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.656220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.656523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.656532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.656869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.656877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.657212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.657220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.657570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.657579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.657960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.657968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.658312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.658322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 
00:30:01.418 [2024-07-15 21:20:28.658672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.658679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.659037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.659044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.659401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.659409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.659657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.659664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.659889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.659896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.660237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.660245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.660551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.660559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.660891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.660898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.661095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.661102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.661363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.661371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 
00:30:01.418 [2024-07-15 21:20:28.661683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.661690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.662060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.662069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.662316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.662325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.662666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.662673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.663010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.663020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.663389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.663397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.663598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.663607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.663943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.663951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.664277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.664285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.664355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.664362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 
00:30:01.418 [2024-07-15 21:20:28.664662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.664670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.664924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.664932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.665268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.665276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.665499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.665506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.665893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.665901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.666108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.666115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.666299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.418 [2024-07-15 21:20:28.666307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.418 qpair failed and we were unable to recover it. 00:30:01.418 [2024-07-15 21:20:28.666354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.666362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.666597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.666605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.666942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.666949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 
00:30:01.419 [2024-07-15 21:20:28.667322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.667329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.667683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.667691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.668063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.668071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.668398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.668406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.668751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.668759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.669097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.669106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.669505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.669513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.669860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.669869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.670246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.670254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.670639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.670649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 
00:30:01.419 [2024-07-15 21:20:28.670891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.670898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.671243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.671251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.671624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.671632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.671969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.671977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.672300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.672308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.672524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.672532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.672871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.672879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.673264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.673272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.673574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.673591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.673778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.673786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 
00:30:01.419 [2024-07-15 21:20:28.674107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.674115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.674545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.674553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.419 [2024-07-15 21:20:28.674899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.419 [2024-07-15 21:20:28.674906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.419 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.675118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.675127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.675480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.675488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.675694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.675701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.675912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.675920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.676292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.676301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.676673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.676681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.677009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.677016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 
00:30:01.694 [2024-07-15 21:20:28.677393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.677400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.677752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.677761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.678098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.678105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.694 qpair failed and we were unable to recover it. 00:30:01.694 [2024-07-15 21:20:28.678468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.694 [2024-07-15 21:20:28.678476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.678722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.678729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.679110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.679119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.679420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.679428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.679815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.679823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.680155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.680163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.680505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.680514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 
00:30:01.695 [2024-07-15 21:20:28.680847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.680854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.681174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.681181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.681495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.681503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.681836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.681843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.682210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.682217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.682450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.682458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.682856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.682864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.683245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.683253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.683623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.683631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.683976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.683985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 
00:30:01.695 [2024-07-15 21:20:28.684295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.684304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.684550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.684557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.684796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.684803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.685023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.685031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.685332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.685340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.685701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.685709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.686076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.686084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.686444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.686452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.686786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.686794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.687148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.687155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 
00:30:01.695 [2024-07-15 21:20:28.687502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.687510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.687866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.687874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.688205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.688213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.688475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.688483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.688828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.688835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.689211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.689218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.689379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.689387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.689622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.689630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.689944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.689951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.690152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.690159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 
00:30:01.695 [2024-07-15 21:20:28.690537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.690545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.690884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.690891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.691228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.691239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.695 [2024-07-15 21:20:28.691664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.695 [2024-07-15 21:20:28.691672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.695 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.691894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.691901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.692288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.692295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.692688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.692696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.692993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.693001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.693104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.693113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.693477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.693485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 
00:30:01.696 [2024-07-15 21:20:28.693913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.693920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.694262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.694270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.694621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.694629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.694962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.694971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.695310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.695317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.695482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.695489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.695825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.695833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.696168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.696176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.696533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.696541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 00:30:01.696 [2024-07-15 21:20:28.696884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.696 [2024-07-15 21:20:28.696894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.696 qpair failed and we were unable to recover it. 
00:30:01.696 [2024-07-15 21:20:28.697243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.696 [2024-07-15 21:20:28.697251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:01.696 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 21:20:28.697 through 21:20:28.764 ...]
00:30:01.701 [2024-07-15 21:20:28.764518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.701 [2024-07-15 21:20:28.764526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:01.701 qpair failed and we were unable to recover it.
00:30:01.701 [2024-07-15 21:20:28.764846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.764853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.765222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.765236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.765528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.765535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.765871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.765877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.766109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.766116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.766433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.766440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.766812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.766819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.767142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.767148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.767478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.701 [2024-07-15 21:20:28.767485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.701 qpair failed and we were unable to recover it. 00:30:01.701 [2024-07-15 21:20:28.767726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.767733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 
00:30:01.702 [2024-07-15 21:20:28.768066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.768072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.768392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.768400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.768745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.768751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.769081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.769087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.769441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.769447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.769775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.769781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.770113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.770121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.770454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.770461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.770802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.770808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.771036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.771043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 
00:30:01.702 [2024-07-15 21:20:28.771293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.771300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.771615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.771622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.771819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.771826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.772182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.772190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.772508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.772515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.772830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.772837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.773084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.773091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.773197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.773204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.773522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.773529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.773836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.773842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 
00:30:01.702 [2024-07-15 21:20:28.774079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.774087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.774405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.774411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.774815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.774822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.775166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.775174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.775524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.775531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.775739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.775746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.776080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.776088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.776422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.776428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.776763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.776774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.777111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.777118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 
00:30:01.702 [2024-07-15 21:20:28.777454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.777461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.777838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.777844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.778087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.778094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.778524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.778531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.778872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.778879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.779202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.779208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.779556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.779563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.779917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.779923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.780215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.780221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 00:30:01.702 [2024-07-15 21:20:28.780544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.702 [2024-07-15 21:20:28.780552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.702 qpair failed and we were unable to recover it. 
00:30:01.703 [2024-07-15 21:20:28.780864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.780870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.781209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.781215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.781623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.781630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.781757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.781763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.782144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.782151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.782558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.782565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.782881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.782890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.783225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.783242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.783432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.783439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.783783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.783791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 
00:30:01.703 [2024-07-15 21:20:28.784134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.784140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.784474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.784481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.784791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.784798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.785172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.785179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.785428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.785435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.785784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.785790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.786109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.786116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.786385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.786391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.786731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.786737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.787052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.787058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 
00:30:01.703 [2024-07-15 21:20:28.787408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.787415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.787766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.787772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.788086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.788092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.788444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.788451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.788795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.788801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.789167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.789174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.789500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.789507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.789853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.789860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.790205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.790211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.790600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.790607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 
00:30:01.703 [2024-07-15 21:20:28.790785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.790792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.791136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.791143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.791478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.791485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.791804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.791811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.792153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.792160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.792526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.792533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.703 [2024-07-15 21:20:28.792879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.703 [2024-07-15 21:20:28.792886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.703 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.793121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.793129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.793443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.793450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.793717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.793723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 
00:30:01.704 [2024-07-15 21:20:28.794062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.794069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.794432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.794438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.794767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.794773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.795130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.795137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.795481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.795487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.795801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.795807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.796151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.796159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.796493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.796500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.796814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.796820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.796990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.796997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 
00:30:01.704 [2024-07-15 21:20:28.797381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.797388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.797729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.797735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.798071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.798078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.798404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.798412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.798748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.798755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.799074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.799080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.799444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.799456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.799752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.799759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.800112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.800120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.800469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.800476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 
00:30:01.704 [2024-07-15 21:20:28.800770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.800777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.801163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.801169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.801499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.801506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.801816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.801823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.802163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.802170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.802494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.802500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.802702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.802709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.802948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.802955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.803276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.803283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.803529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.803536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 
00:30:01.704 [2024-07-15 21:20:28.803879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.803885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.804118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.804124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.804494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.804501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.804783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.804790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.805132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.805138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.805478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.805484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.805871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.805878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.806235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.806242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.806558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.806565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 00:30:01.704 [2024-07-15 21:20:28.806914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.704 [2024-07-15 21:20:28.806921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.704 qpair failed and we were unable to recover it. 
00:30:01.704 [2024-07-15 21:20:28.807239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.704 [2024-07-15 21:20:28.807246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:01.704 qpair failed and we were unable to recover it.
00:30:01.709 [... the same error sequence repeats continuously from 2024-07-15 21:20:28.807 through 21:20:28.874769: posix.c:1023:posix_sock_create reports "connect() failed, errno = 111" and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports "sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420"; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:01.709 [2024-07-15 21:20:28.875007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.875013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.875238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.875245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.875466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.875473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.875684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.875690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.875986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.875992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.876250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.876257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.876565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.876571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.876865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.876871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.877069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.877077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.877297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.877304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 
00:30:01.709 [2024-07-15 21:20:28.877535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.877542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.877732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.877739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.878013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.878019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.878339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.878346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.878500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.878507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.878827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.878833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.879108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.879116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.879288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.709 [2024-07-15 21:20:28.879296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.709 qpair failed and we were unable to recover it. 00:30:01.709 [2024-07-15 21:20:28.879645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.879652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.879970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.879976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 
00:30:01.710 [2024-07-15 21:20:28.880308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.880316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.880552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.880558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.880735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.880742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.881067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.881075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.881192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.881199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.881521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.881528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.881810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.881816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.882176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.882184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.882559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.882566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.882914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.882921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 
00:30:01.710 [2024-07-15 21:20:28.883138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.883145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.883479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.883486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.883811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.883819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.884125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.884133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.884438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.884445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.884791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.884798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.885028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.885037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.885344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.885350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.885700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.885706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.886067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.886081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 
00:30:01.710 [2024-07-15 21:20:28.886425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.886432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.886739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.886746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.887082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.887088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.887401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.887408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.887774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.887780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.888101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.888108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.888434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.888441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.888793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.888800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.888993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.889001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.889317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.889323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 
00:30:01.710 [2024-07-15 21:20:28.889640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.889646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.889992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.889998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.890280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.890287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.890427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.890435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.890782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.890788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.891109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.891116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.891312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.891319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.891657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.891663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.892041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.892048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 00:30:01.710 [2024-07-15 21:20:28.892353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.710 [2024-07-15 21:20:28.892360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.710 qpair failed and we were unable to recover it. 
00:30:01.710 [2024-07-15 21:20:28.892682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.892689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.893122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.893129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.893323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.893330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.893687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.893694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.893936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.893943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.894307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.894314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.894532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.894539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.894907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.894913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.895232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.895239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.895546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.895552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 
00:30:01.711 [2024-07-15 21:20:28.895880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.895886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.896117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.896124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.896203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.896210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.896425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.896433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.896617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.896623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.896816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.896822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.897218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.897224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.897619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.897627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.897970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.897978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.898317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.898324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 
00:30:01.711 [2024-07-15 21:20:28.898582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.898588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.898792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.898798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.899145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.899152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.899487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.899495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.899686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.899694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.900055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.900062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.900222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.900231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.900496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.900502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.900846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.900852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.901255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.901263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 
00:30:01.711 [2024-07-15 21:20:28.901574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.901581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.901783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.901789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.902077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.902083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.902443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.902450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.902765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.902771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.903111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.903118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.903344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.903351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.903691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.903698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.904039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.904046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.904469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.904476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 
00:30:01.711 [2024-07-15 21:20:28.904814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.904820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.905140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.905147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.905499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.905506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.905849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.905857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.906181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.906188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.906531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.906537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.906720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.906727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.907055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.907062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.907307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.907314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 00:30:01.711 [2024-07-15 21:20:28.907748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.711 [2024-07-15 21:20:28.907755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.711 qpair failed and we were unable to recover it. 
00:30:01.711 [2024-07-15 21:20:28.908016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.908023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.908385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.908392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.908731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.908738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.909047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.909053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.909365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.909372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.909630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.909636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.909741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.909748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.910065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.910072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.910391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.910398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.910697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.910704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 
00:30:01.712 [2024-07-15 21:20:28.910937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.910943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.911282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.911289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.911634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.911640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.911984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.911991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.912336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.912342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.912698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.912704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.913044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.913051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.913366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.913373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.913702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.913709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.914029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.914037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 
00:30:01.712 [2024-07-15 21:20:28.914264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.914272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.914632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.914639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.914836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.914843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.915199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.915206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.915477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.915484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.915828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.915834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.916074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.916081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.916408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.916415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.916758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.916765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.917105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.917112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 
00:30:01.712 [2024-07-15 21:20:28.917288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.917295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.917517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.917525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.917871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.917878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.918193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.918202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.918553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.918561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.918887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.918894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.919237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.919244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.919580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.919586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.919778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.919784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.920156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.920162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 
00:30:01.712 [2024-07-15 21:20:28.920540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.920546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.920897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.920904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.921265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.921272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.921607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.921614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.921925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.921933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.922147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.922154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.922394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.922400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.922721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.922729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.922923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.922931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.923286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.923293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 
00:30:01.712 [2024-07-15 21:20:28.923637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.923643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.712 qpair failed and we were unable to recover it. 00:30:01.712 [2024-07-15 21:20:28.923968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.712 [2024-07-15 21:20:28.923975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.924294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.924302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.924649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.924655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.924971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.924978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.925270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.925277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.925573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.925580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.925922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.925929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.926246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.926252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.926574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.926581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 
00:30:01.713 [2024-07-15 21:20:28.926956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.926963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.927287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.927295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.927596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.927604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.927924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.927930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.928286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.928293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.928371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.928377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.928676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.928683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.929013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.929021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.929233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.929240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.929383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.929390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 
00:30:01.713 [2024-07-15 21:20:28.929693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.929739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.930060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.930067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.930384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.930392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.930723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.930732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.931047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.931054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.931201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.931209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.931527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.931534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.931851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.931859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.932005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.932013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.932327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.932334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 
00:30:01.713 [2024-07-15 21:20:28.932681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.932688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.933018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.933025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.933343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.933350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.933705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.933711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.934022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.934028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.934381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.934388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.934728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.934734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.935064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.935071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.935428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.935435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.935776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.935782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 
00:30:01.713 [2024-07-15 21:20:28.936101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.936108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.936306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.936314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.936624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.936631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.936957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.936963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.937213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.937220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.937565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.937572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.937938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.937945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.938269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.938277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.938598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.938604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.938956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.938963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 
00:30:01.713 [2024-07-15 21:20:28.939296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.939303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.939654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.939660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.940027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.940034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.713 [2024-07-15 21:20:28.940383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.713 [2024-07-15 21:20:28.940390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.713 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.940643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.940649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.940888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.940895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.941227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.941236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.941557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.941563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.941909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.941915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.942256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.942263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 
00:30:01.714 [2024-07-15 21:20:28.942587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.942594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.942819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.942826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.943197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.943204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.943589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.943598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.943960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.943967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.944311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.944324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.944664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.944671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.944819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.944826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.945138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.945145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.945578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.945585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 
00:30:01.714 [2024-07-15 21:20:28.945905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.945912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.946223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.946231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.946623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.946630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.946980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.946987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.947226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.947239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.947579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.947586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.947766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.947773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.947940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.947947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.948298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.948305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.948586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.948593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 
00:30:01.714 [2024-07-15 21:20:28.948809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.948815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.949142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.949148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.949454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.949461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.949798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.949804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.950114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.950121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.950464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.950470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.950813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.950819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.951156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.951163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.951523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.951530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.951867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.951874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 
00:30:01.714 [2024-07-15 21:20:28.952193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.952200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.952423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.952430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.952788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.952795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.953071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.953077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.953424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.953431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.953607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.953614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.953965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.953972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.954283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.954290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.954625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.954632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.954976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.954982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 
00:30:01.714 [2024-07-15 21:20:28.955301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.955309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.955662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.714 [2024-07-15 21:20:28.955669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.714 qpair failed and we were unable to recover it. 00:30:01.714 [2024-07-15 21:20:28.956038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.956046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.956412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.956420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.956787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.956793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.957159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.957166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.957363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.957371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.957736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.957742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.958137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.958143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.958429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.958435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 
00:30:01.715 [2024-07-15 21:20:28.958691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.958697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.958935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.958942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.959270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.959277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.959597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.959604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.959961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.959976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.960370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.960377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.960732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.960738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.961087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.961093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.961404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.961411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.961710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.961718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 
00:30:01.715 [2024-07-15 21:20:28.962084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.962090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.962461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.962467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.962728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.962734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.962979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.962987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.963218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.963225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.963585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.963592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.963903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.963911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.964235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.964243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.964578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.964584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.964897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.964904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 
00:30:01.715 [2024-07-15 21:20:28.965142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.965149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.965508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.965516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.965763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.965769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.966137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.966143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.966368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.966375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.966721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.966728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.967062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.967068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.967397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.967403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.967741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.967748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 00:30:01.715 [2024-07-15 21:20:28.968065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.715 [2024-07-15 21:20:28.968071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:01.715 qpair failed and we were unable to recover it. 
00:30:01.715 [2024-07-15 21:20:28.968409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:01.715 [2024-07-15 21:20:28.968417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 
00:30:01.715 qpair failed and we were unable to recover it. 
00:30:01.715 - 00:30:02.015 [2024-07-15 21:20:28.968 - 21:20:29.036] (the same error triplet - posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. - repeats continuously for every reconnect attempt in this interval) 
00:30:02.015 [2024-07-15 21:20:29.036546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:02.015 [2024-07-15 21:20:29.036553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 
00:30:02.015 qpair failed and we were unable to recover it. 
00:30:02.015 [2024-07-15 21:20:29.036923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.036930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.037172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.037179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.037486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.037492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.037810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.037817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.038151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.038158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.038355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.038363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.038679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.038686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.039048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.039055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.039186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.039193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.039533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.039540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 
00:30:02.015 [2024-07-15 21:20:29.039783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.039791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.040090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.040098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.040459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.040466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.040784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.040790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.041146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.041159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.041334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.041341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.041703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.041709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.041956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.041963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.042335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.015 [2024-07-15 21:20:29.042341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.015 qpair failed and we were unable to recover it. 00:30:02.015 [2024-07-15 21:20:29.042664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.042671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 
00:30:02.016 [2024-07-15 21:20:29.043075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.043082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.043402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.043409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.043777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.043790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.043978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.043985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.044341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.044348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.044686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.044693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.045024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.045030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.045267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.045273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.045582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.045589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.045895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.045901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 
00:30:02.016 [2024-07-15 21:20:29.046220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.046227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.046571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.046577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.046855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.046862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.047223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.047236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.047563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.047570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.047913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.047920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.048285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.048292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.048604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.048611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.048952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.048958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.049276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.049283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 
00:30:02.016 [2024-07-15 21:20:29.049489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.049495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.049805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.049811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.050054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.050060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.050373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.050380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.050734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.050741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.051059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.051066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.051374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.051380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.051728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.051734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.052052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.052058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.052289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.052296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 
00:30:02.016 [2024-07-15 21:20:29.052593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.052600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.052944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.052951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.053286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.053293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.053614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.053621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.053966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.053973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.054312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.054319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.054678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.054685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.054888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.054895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.055200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.055206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 00:30:02.016 [2024-07-15 21:20:29.055481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.016 [2024-07-15 21:20:29.055487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.016 qpair failed and we were unable to recover it. 
00:30:02.016 [2024-07-15 21:20:29.055717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.055724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.056032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.056039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.056241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.056248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.056562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.056570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.056895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.056901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.057228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.057236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.057579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.057585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.057919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.057926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.058241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.058249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.058612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.058619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 
00:30:02.017 [2024-07-15 21:20:29.058856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.058863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.059178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.059186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.059513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.059520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.059833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.059842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.060087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.060094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.060388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.060395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.060741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.060747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.061061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.061067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.061366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.061373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.061704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.061711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 
00:30:02.017 [2024-07-15 21:20:29.062020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.062026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.062376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.062383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.062747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.062754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.063082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.063088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.063438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.063444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.063776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.063783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.064018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.064024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.064389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.064396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.064755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.064761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.065111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.065117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 
00:30:02.017 [2024-07-15 21:20:29.065468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.065475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.065773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.065779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.066083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.066089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.066330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.066337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.066669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.066676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.067036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.067048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.067403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.017 [2024-07-15 21:20:29.067410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.017 qpair failed and we were unable to recover it. 00:30:02.017 [2024-07-15 21:20:29.067722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.067729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.068082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.068095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.068445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.068452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 21:20:29.068764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.068772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.069124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.069139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.069481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.069487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.069805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.069812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.070161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.070177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.070511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.070518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.070836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.070843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.071181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.071187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.071623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.071630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.071948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.071954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 21:20:29.072292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.072299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.072671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.072678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.072995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.073001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.073308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.073315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.073550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.073557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.073853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.073860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.074257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.074263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.074635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.074641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.074942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.074948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.075289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.075295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 21:20:29.075624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.075631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.076041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.076048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.076383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.076390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.076757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.076763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.077069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.077076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.077415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.077421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.077743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.077750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.078091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.078097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.078462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.078468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 21:20:29.078835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.018 [2024-07-15 21:20:29.078843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 21:20:29.079053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.018 [2024-07-15 21:20:29.079060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.018 qpair failed and we were unable to recover it.
00:30:02.024 [... the same three-line failure sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with successive timestamps through 2024-07-15 21:20:29.145252 ...]
00:30:02.024 [2024-07-15 21:20:29.145572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.145581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.145976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.145983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.146191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.146204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.146541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.146547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.146871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.146878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.147236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.147244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.147592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.147598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.147803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.147810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.148138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.148144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.148365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.148372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 
00:30:02.024 [2024-07-15 21:20:29.148543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.148550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.148913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.148919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.149258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.149265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.149632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.149638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.149953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.149960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.150110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.150118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.150427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.150434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.150678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.150685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.024 [2024-07-15 21:20:29.150903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.024 [2024-07-15 21:20:29.150910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.024 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.151202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.151209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 
00:30:02.025 [2024-07-15 21:20:29.151469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.151475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.151805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.151812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.152037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.152044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.152385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.152392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.152709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.152716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.152961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.152967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.153303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.153310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.153604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.153611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.153822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.153829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.154167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.154173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 
00:30:02.025 [2024-07-15 21:20:29.154408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.154415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.154720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.154728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.155077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.155083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.155441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.155448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.155818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.155825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.156052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.156059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.156275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.156282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.156640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.156648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.157063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.157071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.157401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.157407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 
00:30:02.025 [2024-07-15 21:20:29.157643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.157651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.157989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.157995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.158245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.158252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.158351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.158358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.158691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.158699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.158991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.158997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.159326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.159333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.159646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.159653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.159966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.159972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.160253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.160261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 
00:30:02.025 [2024-07-15 21:20:29.160606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.160612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.160933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.160939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.161158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.161164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.161380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.161387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.161710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.161716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.162058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.162066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.162406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.162413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.162738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.162744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.163072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.163079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.025 qpair failed and we were unable to recover it. 00:30:02.025 [2024-07-15 21:20:29.163285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.025 [2024-07-15 21:20:29.163292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 
00:30:02.026 [2024-07-15 21:20:29.163635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.163641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.163859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.163865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.164092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.164098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.164407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.164414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.164737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.164743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.165038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.165046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.165256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.165263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.165614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.165620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.165939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.165946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.166273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.166280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 
00:30:02.026 [2024-07-15 21:20:29.166609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.166616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.166973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.166979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.167300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.167308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.167637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.167644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.167997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.168005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.168226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.168237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.168590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.168597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.168951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.168958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.169334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.169341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.169568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.169575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 
00:30:02.026 [2024-07-15 21:20:29.169921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.169931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.170148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.170156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.170330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.170338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.170708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.170715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.171082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.171089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.171431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.171438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.171663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.171670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.172001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.172008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.172225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.172236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.172576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.172582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 
00:30:02.026 [2024-07-15 21:20:29.172898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.172905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.173266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.173273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.173581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.173587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.173764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.173771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.174106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.174113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.174465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.174471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.174795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.174802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.175027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.175034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.175342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.175350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 00:30:02.026 [2024-07-15 21:20:29.175604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.026 [2024-07-15 21:20:29.175611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.026 qpair failed and we were unable to recover it. 
00:30:02.026 [2024-07-15 21:20:29.175810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.175818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.176124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.176132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.176470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.176477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.176711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.176718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.177021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.177035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.177399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.177405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.177752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.177759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.178056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.178063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.178313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.178319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.178662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.178669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 
00:30:02.027 [2024-07-15 21:20:29.178915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.178921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.179161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.179167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.179565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.179572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.179897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.179903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.180205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.180212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.180544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.180551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.180769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.180776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.181060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.181066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.181420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.181427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.181778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.181785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 
00:30:02.027 [2024-07-15 21:20:29.181971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.181979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.182316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.182323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.182538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.182545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.182940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.182947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.183196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.183203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.183543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.183550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.183859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.183866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.184236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.184242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.184558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.184564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 00:30:02.027 [2024-07-15 21:20:29.184872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.027 [2024-07-15 21:20:29.184879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.027 qpair failed and we were unable to recover it. 
00:30:02.027 [2024-07-15 21:20:29.185244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.027 [2024-07-15 21:20:29.185251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.027 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 21:20:29.185 and 21:20:29.253 ...]
00:30:02.033 [2024-07-15 21:20:29.253270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.033 [2024-07-15 21:20:29.253276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.033 qpair failed and we were unable to recover it.
00:30:02.033 [2024-07-15 21:20:29.253614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.253620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.253856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.253862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.254184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.254191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.254517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.254524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.254834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.254841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.255184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.255190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.255507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.255513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.255747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.255755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.256015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.256021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.256371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.256378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 
00:30:02.033 [2024-07-15 21:20:29.256595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.256601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.256907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.256913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.257130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.257137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.257449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.257455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.257812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.257819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.258169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.258176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.258519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.258525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.258846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.258853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.259177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.259184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.259525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.259531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 
00:30:02.033 [2024-07-15 21:20:29.259764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.259770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-07-15 21:20:29.260114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.033 [2024-07-15 21:20:29.260121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.260483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.260491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.260855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.260862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.261199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.261206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.261362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.261369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.261540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.261547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.261620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.261628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.262020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.262027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.262378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.262385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 
00:30:02.034 [2024-07-15 21:20:29.262709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.262715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.263076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.263082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.263416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.263422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.263760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.263767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.264004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.264011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.264266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.264273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.264618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.264624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.265003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.265010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.265338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.265345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.265564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.265570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 
00:30:02.034 [2024-07-15 21:20:29.265939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.265946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.266309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.266316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.266649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.266655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.266902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.266909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.267113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.267119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.267421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.267428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.267797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.267803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.268165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.268173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.268501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.268507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-07-15 21:20:29.268724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.268731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 
00:30:02.034 [2024-07-15 21:20:29.269042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.034 [2024-07-15 21:20:29.269049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.269439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.269447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.269676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.269683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.270043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.270050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.270292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.270299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.270593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.270600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.271004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.271011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.271376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.271383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.271718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.271725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.272080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.272086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 
00:30:02.345 [2024-07-15 21:20:29.272431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.272438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.272756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.272763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.272992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.272998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.273378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.273385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.273732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.273740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.274029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.274035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.274348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.274355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.274689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.274695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.275015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.275021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 00:30:02.345 [2024-07-15 21:20:29.275413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.345 [2024-07-15 21:20:29.275421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.345 qpair failed and we were unable to recover it. 
00:30:02.346 [2024-07-15 21:20:29.275765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.275771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.276089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.276095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.276436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.276443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.276846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.276852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.277184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.277190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.277536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.277543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.277879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.277885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.278194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.278201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.278403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.278409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.278759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.278766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 
00:30:02.346 [2024-07-15 21:20:29.279104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.279111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.279446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.279452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.279804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.279810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.280134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.280141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.280511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.280518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.280813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.280820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.281060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.281066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.281427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.281435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.281799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.281807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.282132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.282139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 
00:30:02.346 [2024-07-15 21:20:29.282479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.282485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.282822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.282828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.283162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.283168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.283501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.283508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.283871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.283877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.284193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.284200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.284532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.284539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.284865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.284871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.285193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.285199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.285539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.285547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 
00:30:02.346 [2024-07-15 21:20:29.285871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.285878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.286167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.286174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.286501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.286508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.286881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.286888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.287222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.287234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.287564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.287570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.287802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.287809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.288155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.288162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.288484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.288491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 00:30:02.346 [2024-07-15 21:20:29.288825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.346 [2024-07-15 21:20:29.288831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.346 qpair failed and we were unable to recover it. 
00:30:02.346 [2024-07-15 21:20:29.289181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.289188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.289534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.289541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.289697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.289705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.290054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.290060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.290419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.290426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.290782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.290788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.291135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.291142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.291312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.291319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.291552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.291558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.291775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.291781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 
00:30:02.347 [2024-07-15 21:20:29.292125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.292131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.292304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.292311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.292550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.292556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.292849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.292855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.293032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.293039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.293416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.293422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.293778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.293785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.294154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.294162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.294473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.294480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.294755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.294762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 
00:30:02.347 [2024-07-15 21:20:29.295090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.295096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.295438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.295446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.295784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.295791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.296150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.296157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.296499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.296506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.296822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.296829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.297193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.297201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.297540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.297547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.297885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.297892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.298134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.298141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 
00:30:02.347 [2024-07-15 21:20:29.298343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.298351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.298580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.298587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.298929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.298936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.299275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.299282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.299627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.299633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.299993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.299999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.300324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.300331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.300686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.300693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.300916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.300922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.347 [2024-07-15 21:20:29.301131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.301138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 
00:30:02.347 [2024-07-15 21:20:29.301350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.347 [2024-07-15 21:20:29.301358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.347 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.301680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.301687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.302016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.302023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.302363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.302370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.302785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.302792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.303105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.303113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.303395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.303401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.303788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.303795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.304114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.304120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.304464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.304471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 
00:30:02.348 [2024-07-15 21:20:29.304657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.304664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.304989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.304996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.305340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.305347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.305544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.305551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.305935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.305942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.306180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.306187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.306505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.306512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.306829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.306838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.307191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.307198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.307538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.307545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 
00:30:02.348 [2024-07-15 21:20:29.307910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.307917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.308194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.308201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.308577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.308584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.308808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.308815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.309155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.309162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.309531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.309538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.309868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.309876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.310106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.310114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.310366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.310373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.310714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.310721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 
00:30:02.348 [2024-07-15 21:20:29.311056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.311063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.311369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.311375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.311708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.311715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.312063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.312070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.312438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.312444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.312763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.312769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.313126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.313140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.313480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.313487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.313821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.313828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.314177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.314192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 
00:30:02.348 [2024-07-15 21:20:29.314443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.348 [2024-07-15 21:20:29.314450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.348 qpair failed and we were unable to recover it. 00:30:02.348 [2024-07-15 21:20:29.314645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.314652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.314987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.314993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.315355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.315362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.315705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.315712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.316042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.316049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.316370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.316377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.316589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.316595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.317004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.317010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.317351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.317357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 
00:30:02.349 [2024-07-15 21:20:29.317696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.317703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.318063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.318070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.318436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.318443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.318667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.318673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.318891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.318898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.319261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.319268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.319577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.319583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.319828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.319836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.320168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.320174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.320517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.320523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 
00:30:02.349 [2024-07-15 21:20:29.320838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.320846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.321207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.321214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.321529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.321536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.321846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.321853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.322220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.322227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.322543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.322550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.322916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.322923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.323295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.323302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.323617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.323623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.323963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.323969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 
00:30:02.349 [2024-07-15 21:20:29.324294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.324301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.349 [2024-07-15 21:20:29.324606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.349 [2024-07-15 21:20:29.324613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.349 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.324815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.324821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.325148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.325155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.325504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.325510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.325869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.325876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.326247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.326255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.326485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.326492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.326811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.326817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.327036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.327042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 
00:30:02.350 [2024-07-15 21:20:29.327221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.327228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.327574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.327581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.327901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.327908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.328260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.328268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.328606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.328613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.328967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.328973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.329302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.329310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.329600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.329607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.329936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.329942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.330299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.330305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 
00:30:02.350 [2024-07-15 21:20:29.330486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.330493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.330853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.330860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.331202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.331208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.331553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.331559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.331904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.331910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.332254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.332260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.332604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.332610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.332974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.332983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.333165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.333173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.333502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.333509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 
00:30:02.350 [2024-07-15 21:20:29.333833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.333840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.334185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.334192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.334533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.334541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.334900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.334907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.335242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.335250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.335574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.335580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.335906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.335912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.336268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.336275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.336635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.336641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.336962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.336968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 
00:30:02.350 [2024-07-15 21:20:29.337322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.337328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.337664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.337670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.350 [2024-07-15 21:20:29.337990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.350 [2024-07-15 21:20:29.337997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.350 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.338324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.338330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.338525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.338532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.338857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.338864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.339168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.339175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.339588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.339594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.339917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.339924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.340257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.340264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 
00:30:02.351 [2024-07-15 21:20:29.340597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.340603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.340948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.340954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.341292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.341299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.341637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.341645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.342001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.342007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.342294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.342306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.342634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.342641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.342964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.342971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.343290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.343297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.343553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.343560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 
00:30:02.351 [2024-07-15 21:20:29.343871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.343877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.344209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.344215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.344553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.344560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.344924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.344930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.345212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.345219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.345519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.345526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.345878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.345884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.346225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.346236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.346432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.346439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.346729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.346736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 
00:30:02.351 [2024-07-15 21:20:29.346958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.346964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.347308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.347314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.347608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.347614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.347787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.347794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.348162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.348168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.348555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.348562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.348889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.348896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.349236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.349242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.349557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.349563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.349803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.349810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 
00:30:02.351 [2024-07-15 21:20:29.350060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.350068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.350417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.350424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.350755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.350761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.351 qpair failed and we were unable to recover it. 00:30:02.351 [2024-07-15 21:20:29.351041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.351 [2024-07-15 21:20:29.351048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.351360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.351366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.351579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.351585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.351975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.351981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.352348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.352356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.352651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.352658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.352989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.352996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 
00:30:02.352 [2024-07-15 21:20:29.353208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.353214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.353555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.353561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.353858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.353865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.354058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.354066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.354374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.354381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.354732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.354739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.354950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.354957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.355320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.355328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.355710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.355716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.355914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.355921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 
00:30:02.352 [2024-07-15 21:20:29.356218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.356224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.356558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.356565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.356922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.356928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.357248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.357254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.357498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.357504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.357833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.357839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.358168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.358174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.358488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.358496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.358735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.358742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.359095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.359110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 
00:30:02.352 [2024-07-15 21:20:29.359340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.359355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.359632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.359638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.359975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.359981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.360293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.360300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.360594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.360600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.360859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.360865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.361203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.361210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.361572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.361578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.361871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.361878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.362086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.362094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 
00:30:02.352 [2024-07-15 21:20:29.362443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.362450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.362761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.362768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.363079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.363085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.363329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.352 [2024-07-15 21:20:29.363336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.352 qpair failed and we were unable to recover it. 00:30:02.352 [2024-07-15 21:20:29.363627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.363634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.363921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.363927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.364282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.364289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.364635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.364649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.364987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.364993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.365339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.365346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 
00:30:02.353 [2024-07-15 21:20:29.365690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.365696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.366059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.366066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.366414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.366422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.366794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.366800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.367147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.367154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.367492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.367499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.367725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.367731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.367991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.367997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.368338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.368345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.368688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.368694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 
00:30:02.353 [2024-07-15 21:20:29.369019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.369025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.369399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.369405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.369489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.369495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.369801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.369807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.370033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.370039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.370370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.370377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.370691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.370698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.370906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.370914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.371292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.371299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.371623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.371639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 
00:30:02.353 [2024-07-15 21:20:29.371977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.371983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.372328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.372335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.372499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.372506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.372840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.372846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.373176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.373183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.373564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.373571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.373745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.373752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.374144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.374151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.374475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.374482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.374702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.374709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 
00:30:02.353 [2024-07-15 21:20:29.375048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.375055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.375424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.375431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.375744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.375750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.376111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.376125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.376364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.353 [2024-07-15 21:20:29.376371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.353 qpair failed and we were unable to recover it. 00:30:02.353 [2024-07-15 21:20:29.376698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.376706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.377066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.377072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.377385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.377393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.377695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.377701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.378002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.378008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 
00:30:02.354 [2024-07-15 21:20:29.378362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.378369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.378686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.378693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.379021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.379028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.379367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.379374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.379796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.379804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.380177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.380183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.380408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.380415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.380614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.380620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.380941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.380949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.381126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.381134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 
00:30:02.354 [2024-07-15 21:20:29.381481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.381488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.381808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.381815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.382157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.382164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.382510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.382517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.382881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.382887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.383220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.383226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.383549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.383556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.383761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.383769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.384103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.384110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.384437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.384444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 
00:30:02.354 [2024-07-15 21:20:29.384762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.384769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.385109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.385115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.385323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.385330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.385556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.385563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.385783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.385790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.386122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.386129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.386273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.386279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.386502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.386508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.386799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.386805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 00:30:02.354 [2024-07-15 21:20:29.387092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.387098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.354 qpair failed and we were unable to recover it. 
00:30:02.354 [2024-07-15 21:20:29.387490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.354 [2024-07-15 21:20:29.387497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.387805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.387812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.388202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.388209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.388534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.388541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.388917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.388924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.389251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.389258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.389683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.389689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.390024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.390031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.390372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.390378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.390624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.390630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 
00:30:02.355 [2024-07-15 21:20:29.390853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.390859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.391194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.391201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.391537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.391544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.391865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.391871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.392245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.392253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.392593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.392600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.392963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.392970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.393299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.393306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2165248 Killed "${NVMF_APP[@]}" "$@"
00:30:02.355 [2024-07-15 21:20:29.393626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.393633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.393999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.394006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.394215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.394221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:02.355 [2024-07-15 21:20:29.394565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.394572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:02.355 [2024-07-15 21:20:29.394927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.394935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:02.355 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:02.355 [2024-07-15 21:20:29.395274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.395284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.395380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.395388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:02.355 [2024-07-15 21:20:29.395764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.395772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
00:30:02.355 [2024-07-15 21:20:29.396065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.355 [2024-07-15 21:20:29.396073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.355 qpair failed and we were unable to recover it.
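The "Killed" record above explains the refusals: target_disconnect.sh line 36 killed the running nvmf target application (PID 2165248), so there is no longer a listener on 10.0.0.2:4420 while the initiator keeps trying to reconnect, and each attempt ends with "qpair failed and we were unable to recover it." The disconnect_init and nvmfappstart -m 0xF0 trace lines show the harness immediately starting a replacement target. The sketch below only illustrates what the host side is doing during that window: repeated TCP connects that are refused until a listener comes back. It is not SPDK's reconnect logic; the retry budget and delay are invented for the example, while the address and port are the ones from the log.

/* retry_connect.c - schematic reconnect loop against an NVMe/TCP listener.
 * Illustration only; retry count and delay are arbitrary. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

    for (int attempt = 1; attempt <= 10; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("attempt %d: connected\n", attempt);
            close(fd);
            return 0;
        }
        /* While the target is down this prints errno = 111 (ECONNREFUSED). */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        usleep(200 * 1000);                           /* brief pause before retrying */
    }
    return 1;
}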
00:30:02.355 [2024-07-15 21:20:29.396416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.396423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.396766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.396773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.397146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.397153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.397470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.397477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.397703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.397709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.398055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.398062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.398355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.398362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.398739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.398746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.399045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.399052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.399410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.399417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 
00:30:02.355 [2024-07-15 21:20:29.399733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.399740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.355 [2024-07-15 21:20:29.400047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.355 [2024-07-15 21:20:29.400057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.355 qpair failed and we were unable to recover it. 00:30:02.356 [2024-07-15 21:20:29.400395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.356 [2024-07-15 21:20:29.400402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.356 qpair failed and we were unable to recover it. 00:30:02.356 [2024-07-15 21:20:29.400579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.356 [2024-07-15 21:20:29.400586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.356 qpair failed and we were unable to recover it. 00:30:02.356 [2024-07-15 21:20:29.400939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.356 [2024-07-15 21:20:29.400945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.356 qpair failed and we were unable to recover it. 00:30:02.356 [2024-07-15 21:20:29.401267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.356 [2024-07-15 21:20:29.401274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.356 qpair failed and we were unable to recover it. 00:30:02.356 [2024-07-15 21:20:29.401583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.356 [2024-07-15 21:20:29.401590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.356 qpair failed and we were unable to recover it. 00:30:02.356 [2024-07-15 21:20:29.401923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.356 [2024-07-15 21:20:29.401931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.356 qpair failed and we were unable to recover it. 00:30:02.356 [2024-07-15 21:20:29.402273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.356 [2024-07-15 21:20:29.402280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.356 qpair failed and we were unable to recover it. 00:30:02.356 [2024-07-15 21:20:29.402606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.356 [2024-07-15 21:20:29.402612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.356 qpair failed and we were unable to recover it. 
00:30:02.356 [2024-07-15 21:20:29.402833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.402840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2166281
00:30:02.356 [2024-07-15 21:20:29.403225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.403237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2166281
00:30:02.356 [2024-07-15 21:20:29.403565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.403573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:02.356 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2166281 ']'
00:30:02.356 [2024-07-15 21:20:29.403925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:02.356 [2024-07-15 21:20:29.403943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:02.356 [2024-07-15 21:20:29.404193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.404200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:02.356 [2024-07-15 21:20:29.404538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.404546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
21:20:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:02.356 [2024-07-15 21:20:29.404914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.404922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 [2024-07-15 21:20:29.405284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.405292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 [2024-07-15 21:20:29.405590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.405597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 [2024-07-15 21:20:29.405975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.405982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 [2024-07-15 21:20:29.406334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.406341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 [2024-07-15 21:20:29.406715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.406722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 [2024-07-15 21:20:29.407049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.407056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 [2024-07-15 21:20:29.407399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.407406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
00:30:02.356 [2024-07-15 21:20:29.407718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.356 [2024-07-15 21:20:29.407725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.356 qpair failed and we were unable to recover it.
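At this point the harness has launched the replacement target: nvmfpid=2166281 is the new nvmf_tgt started inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0 (cores 4-7), and waitforlisten 2166281 then polls, with local max_retries=100, until that process is up and accepting connections on the RPC UNIX-domain socket /var/tmp/spdk.sock. The real waiting loop is a shell helper in the test's common scripts; the C sketch below only illustrates the underlying idea of polling a UNIX-domain listener. The socket path and retry count are taken from the log; everything else (the delay, the function name) is assumed for the example.

/* wait_listener.c - poll until something accepts connections on an AF_UNIX path.
 * A sketch of the idea behind waitforlisten, not the actual helper. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_unix_listener(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;                    /* listener is up */
        }
        close(fd);
        usleep(100 * 1000);              /* wait 100 ms and try again */
    }
    return -1;                           /* gave up after max_retries attempts */
}

int main(void)
{
    /* /var/tmp/spdk.sock and 100 retries mirror the values in the log above. */
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) == 0)
        printf("RPC socket is accepting connections\n");
    else
        printf("timed out waiting for the RPC socket\n");
    return 0;
}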
00:30:02.360 [2024-07-15 21:20:29.452650] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization...
00:30:02.360 [2024-07-15 21:20:29.452700] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
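The EAL parameters above show the freshly started target pinned with core mask 0xF0 (the -m 0xF0 argument to nvmf_tgt, passed through as -c 0xF0 to DPDK), i.e. bits 4-7 set. A tiny C sketch of how such a hex core mask maps to CPU indices:

/* Small sketch decoding the core mask seen above: each set bit selects
 * one CPU index, so 0xF0 selects cores 4, 5, 6 and 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;   /* value taken from the log */

    printf("core mask 0x%lX selects cores:", mask);
    for (int cpu = 0; mask != 0; cpu++, mask >>= 1) {
        if (mask & 1UL)
            printf(" %d", cpu);
    }
    printf("\n");
    return 0;
}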
00:30:02.361 [2024-07-15 21:20:29.467727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.467734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.467966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.467974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.468307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.468315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.468727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.468733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.469079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.469086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.469343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.469350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.469684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.469691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.469995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.470001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.470149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.361 [2024-07-15 21:20:29.470157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.361 qpair failed and we were unable to recover it. 00:30:02.361 [2024-07-15 21:20:29.470505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.470512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 
00:30:02.362 [2024-07-15 21:20:29.470866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.470873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.471224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.471235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.471557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.471564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.471739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.471746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.472080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.472087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.472335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.472342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.472657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.472664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.473004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.473011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.473359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.473367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.473547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.473554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 
00:30:02.362 [2024-07-15 21:20:29.473890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.473898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.474239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.474246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.474605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.474612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.474785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.474792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.475015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.475022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.475390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.475397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.475768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.475774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.476073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.476079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.476429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.476436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.476695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.476701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 
00:30:02.362 [2024-07-15 21:20:29.477040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.477047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.477365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.477371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.477689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.477696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.478011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.478018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.478383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.478390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.478709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.478718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.479024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.479031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.479350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.479357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.479698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.479705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.480026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.480034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 
00:30:02.362 [2024-07-15 21:20:29.480368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.480375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.480667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.480674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.480908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.480915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.481224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.481233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.362 qpair failed and we were unable to recover it. 00:30:02.362 [2024-07-15 21:20:29.481588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.362 [2024-07-15 21:20:29.481594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.481994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.482000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.482321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.482328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.482635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.482641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.482984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.482991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.483309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.483316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 
00:30:02.363 [2024-07-15 21:20:29.483651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.483657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.484012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.484027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.484212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.484220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.484588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.484604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.484915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.484921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.485245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.485252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.485554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.485561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.485899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.485906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.486226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.486237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.486572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.486579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 
00:30:02.363 [2024-07-15 21:20:29.486821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.486827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.487173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.487179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.487539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.487546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.487843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.487850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.488179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.488186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.488533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.488540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.488859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.488866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.489233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.489241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.489430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.489438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.489745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.489751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 
00:30:02.363 [2024-07-15 21:20:29.490111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.490117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.490354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.490361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.363 qpair failed and we were unable to recover it. 00:30:02.363 [2024-07-15 21:20:29.490720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.363 [2024-07-15 21:20:29.490726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.491045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.491053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.491399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.491406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.491690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.491699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.492020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.492026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.492224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.492238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.492487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.492493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.492843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.492850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 
00:30:02.364 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.364 [2024-07-15 21:20:29.493198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.493205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.493525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.493532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.493862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.493869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.494210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.494216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.494544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.494551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.494896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.494903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.495246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.495253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.495592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.495601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.495843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.495852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.496101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.496109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 
00:30:02.364 [2024-07-15 21:20:29.496455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.496463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.496790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.496796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.497090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.497096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.497353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.497362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.497585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.497592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.497931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.497938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.498164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.498171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.498495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.498502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.498781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.498788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.499119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.499125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 
00:30:02.364 [2024-07-15 21:20:29.499305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.499313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.499656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.499662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.499991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.499997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.500347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.500353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.500745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.500752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.501081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.501088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.501447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.501454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.501806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.501814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.502153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.502160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.364 qpair failed and we were unable to recover it. 00:30:02.364 [2024-07-15 21:20:29.502523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.364 [2024-07-15 21:20:29.502529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 
00:30:02.365 [2024-07-15 21:20:29.502842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.502848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.503200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.503214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.503574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.503581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.503892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.503899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.504239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.504247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.504584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.504591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.504930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.504937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.505252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.505259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.505489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.505496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.505933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.505940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 
00:30:02.365 [2024-07-15 21:20:29.506217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.506234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.506585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.506593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.506906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.506913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.507269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.507276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.507612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.507618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.507868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.507874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.508104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.508110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.508473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.508480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.508826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.508835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 00:30:02.365 [2024-07-15 21:20:29.509197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.365 [2024-07-15 21:20:29.509205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.365 qpair failed and we were unable to recover it. 
00:30:02.365 [2024-07-15 21:20:29.509502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.365 [2024-07-15 21:20:29.509509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.365 qpair failed and we were unable to recover it.
[... the same three-line record (connect() failed, errno = 111 / sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 21:20:29.509696 through 21:20:29.547647 ...]
00:30:02.369 [2024-07-15 21:20:29.548056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[... the identical retry record continues from 21:20:29.548117 through 21:20:29.576342 ...]
00:30:02.371 [2024-07-15 21:20:29.576565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.371 [2024-07-15 21:20:29.576572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.371 qpair failed and we were unable to recover it.
00:30:02.371 [2024-07-15 21:20:29.576778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.371 [2024-07-15 21:20:29.576784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.371 qpair failed and we were unable to recover it. 00:30:02.371 [2024-07-15 21:20:29.577088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.371 [2024-07-15 21:20:29.577096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.371 qpair failed and we were unable to recover it. 00:30:02.371 [2024-07-15 21:20:29.577415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.371 [2024-07-15 21:20:29.577422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.371 qpair failed and we were unable to recover it. 00:30:02.371 [2024-07-15 21:20:29.577780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.371 [2024-07-15 21:20:29.577787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.371 qpair failed and we were unable to recover it. 00:30:02.371 [2024-07-15 21:20:29.578027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.371 [2024-07-15 21:20:29.578033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.371 qpair failed and we were unable to recover it. 00:30:02.371 [2024-07-15 21:20:29.578378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.371 [2024-07-15 21:20:29.578386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.371 qpair failed and we were unable to recover it. 00:30:02.371 [2024-07-15 21:20:29.578616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.371 [2024-07-15 21:20:29.578622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.371 qpair failed and we were unable to recover it. 00:30:02.371 [2024-07-15 21:20:29.578967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.371 [2024-07-15 21:20:29.578974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.579309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.579316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.579547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.579554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 
00:30:02.372 [2024-07-15 21:20:29.579917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.579923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.580211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.580217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.580478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.580485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.580690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.580698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.581034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.581042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.581232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.581240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.581577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.581584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.581962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.581969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.582360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.582367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.582721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.582728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 
00:30:02.372 [2024-07-15 21:20:29.583075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.583082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.583321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.583328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.583688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.583695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.583899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.583906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.584276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.584283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.584637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.584644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.584987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.584994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.585309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.585317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.585689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.585696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.586023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.586033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 
00:30:02.372 [2024-07-15 21:20:29.586398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.586405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.586744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.586752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.586989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.586996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.587212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.587220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.587578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.587585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.587789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.587795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.588161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.588168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.588488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.588496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.588842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.588849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.589228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.589238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 
00:30:02.372 [2024-07-15 21:20:29.589550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.589557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.589925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.589931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.590300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.590307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.372 qpair failed and we were unable to recover it. 00:30:02.372 [2024-07-15 21:20:29.590730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.372 [2024-07-15 21:20:29.590737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.590926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.590932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.591235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.591242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.591469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.591475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.591696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.591704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.591974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.591981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.592313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.592319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 
00:30:02.373 [2024-07-15 21:20:29.592626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.592633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.592692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.592699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.593048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.593054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.593429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.593436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.593753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.593759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.594107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.594113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.594452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.594459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.594676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.594683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.594991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.594998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.595356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.595363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 
00:30:02.373 [2024-07-15 21:20:29.595574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.595580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.595859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.595866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.596183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.596190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.596488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.596495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.596597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.596604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.596831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.596837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.597053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.597059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.597406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.597414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.597747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.597753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.597954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.597962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 
00:30:02.373 [2024-07-15 21:20:29.598298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.598305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.373 qpair failed and we were unable to recover it. 00:30:02.373 [2024-07-15 21:20:29.598662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.373 [2024-07-15 21:20:29.598669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.599013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.599019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.599276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.599282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.599503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.599509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.599694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.599701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.599926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.599934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.600292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.600299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.600658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.600665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.601014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.601022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 
00:30:02.374 [2024-07-15 21:20:29.601224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.601235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.601601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.601609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.601967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.601974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.602216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.602222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.602584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.602591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.602917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.602924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.603273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.603280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.603652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.603658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.604022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.604028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.604370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.604378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 
00:30:02.374 [2024-07-15 21:20:29.604757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.604763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.605126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.605132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.605556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.605564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.605773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.605780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.605979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.605986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.606324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.606331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.606667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.606674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.607034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.607042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.607393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.607400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.607742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.607749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 
00:30:02.374 [2024-07-15 21:20:29.608105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.608117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.608455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.608462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.608865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.608871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.609206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.609213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.609545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.609552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.609866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.609872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.610191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.610197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.610397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.374 [2024-07-15 21:20:29.610403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.374 qpair failed and we were unable to recover it. 00:30:02.374 [2024-07-15 21:20:29.610736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.610742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.611078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.611087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 
00:30:02.375 [2024-07-15 21:20:29.611294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.611301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.611619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.611625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.611969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.611976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.612198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.612204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.612561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.612568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.612821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.612828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.613040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.613047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.613416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.613423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.613800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.613806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 00:30:02.375 [2024-07-15 21:20:29.614178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.375 [2024-07-15 21:20:29.614185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.375 qpair failed and we were unable to recover it. 
00:30:02.375 [2024-07-15 21:20:29.614534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.375 [2024-07-15 21:20:29.614541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.375 qpair failed and we were unable to recover it.
00:30:02.375 [2024-07-15 21:20:29.614888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.375 [2024-07-15 21:20:29.614895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.375 qpair failed and we were unable to recover it.
00:30:02.375 [2024-07-15 21:20:29.615179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:02.375 [2024-07-15 21:20:29.615206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:02.375 [2024-07-15 21:20:29.615217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:02.375 [2024-07-15 21:20:29.615224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:02.375 [2024-07-15 21:20:29.615234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:02.375 [2024-07-15 21:20:29.615212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.375 [2024-07-15 21:20:29.615219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.375 qpair failed and we were unable to recover it.
00:30:02.375 [2024-07-15 21:20:29.615572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.375 [2024-07-15 21:20:29.615579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.375 qpair failed and we were unable to recover it.
00:30:02.375 [2024-07-15 21:20:29.615818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.375 [2024-07-15 21:20:29.615826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.375 qpair failed and we were unable to recover it.
00:30:02.375 [2024-07-15 21:20:29.615723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:02.375 [2024-07-15 21:20:29.615844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:02.375 [2024-07-15 21:20:29.615970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:02.375 [2024-07-15 21:20:29.615971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:02.375 [2024-07-15 21:20:29.616187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.375 [2024-07-15 21:20:29.616194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.375 qpair failed and we were unable to recover it.
00:30:02.375 [2024-07-15 21:20:29.616520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.375 [2024-07-15 21:20:29.616527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.375 qpair failed and we were unable to recover it.
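The recurring errno = 111 above is ECONNREFUSED: the host's connect() to 10.0.0.2 port 4420 is being rejected because nothing is accepting connections on that address yet, which is consistent with the target application only now printing its trace-setup and reactor-start notices. The standalone sketch below is illustrative only and is not SPDK code; the address and port are copied from the log entries, everything else is assumed. It reproduces the same errno that posix_sock_create reports.

    /* Illustrative sketch (not SPDK code): connect() to an address/port with no
     * listener typically fails with errno 111 (ECONNREFUSED) on Linux, matching
     * the posix_sock_create errors in this log. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr = {0};
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on 10.0.0.2:4420 this typically prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Each refused attempt shows up in the log as one posix_sock_create error, one nvme_tcp_qpair_connect_sock error, and a "qpair failed and we were unable to recover it." line, and the triplet repeats for as long as the connection keeps being refused.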
00:30:02.375 [2024-07-15 21:20:29.616773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.375 [2024-07-15 21:20:29.616779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.375 qpair failed and we were unable to recover it.
00:30:02.375 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously with only the timestamp changing, from 21:20:29.616773 through 21:20:29.677242 (console prefixes 00:30:02.375 to 00:30:02.665); every reconnect attempt to the target is refused and the qpair is never recovered ...]
00:30:02.665 [2024-07-15 21:20:29.677505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.677511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.677879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.677886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.678216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.678222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.678614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.678621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.678907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.678913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.679141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.679147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.679396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.679403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.679733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.679739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.680094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.680100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.680303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.680311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 
00:30:02.665 [2024-07-15 21:20:29.680702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.680708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.681041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.681047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.681333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.681339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.681701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.681707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.682051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.682058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.682258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.682265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.682452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.682459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.682787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.682793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.683158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.683165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.683393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.683400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 
00:30:02.665 [2024-07-15 21:20:29.683665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.683672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-07-15 21:20:29.684045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.665 [2024-07-15 21:20:29.684052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.684399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.684406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.684728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.684735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.684986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.684992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.685385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.685392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.685758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.685765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.686111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.686117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.686206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.686212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.686259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.686265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 
00:30:02.666 [2024-07-15 21:20:29.686489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.686496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.686792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.686798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.687124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.687130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.687491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.687498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.687848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.687854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.688190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.688196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.688367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.688374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.688693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.688701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.688953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.688959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.689292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.689300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 
00:30:02.666 [2024-07-15 21:20:29.689650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.689657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.690040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.690046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.690417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.690424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.690643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.690649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.691078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.691084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.691410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.691417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.691765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.691771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.692142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.692149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.692358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.692366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.692557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.692566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 
00:30:02.666 [2024-07-15 21:20:29.692934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.692941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.693275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.693282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.693628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.693635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.693971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.693977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.694297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.694304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.694636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.694643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.694958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.694965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.695336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.695343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.695601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.695607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.695976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.695983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 
00:30:02.666 [2024-07-15 21:20:29.696030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.696037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.696250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.696257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-07-15 21:20:29.696565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.666 [2024-07-15 21:20:29.696571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.696883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.696890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.697222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.697228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.697412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.697418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.697606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.697614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.697918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.697924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.698204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.698211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.698399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.698405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 
00:30:02.667 [2024-07-15 21:20:29.698613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.698620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.698847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.698853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.699274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.699281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.699480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.699487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.699597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.699603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.699921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.699928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.700278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.700285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.700604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.700610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.700933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.700939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.701125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.701131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 
00:30:02.667 [2024-07-15 21:20:29.701419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.701426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.701787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.701793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.702121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.702127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.702588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.702595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.702916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.702923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.703281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.703288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.703482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.703488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.703808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.703815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.704160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.704167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.704508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.704517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 
00:30:02.667 [2024-07-15 21:20:29.704750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.704757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.704973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.704986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.705384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.705391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.705734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.705741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.706080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.706087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.706436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.706443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.706649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.706655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.706995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.707001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.707262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.707269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.707631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.707637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 
00:30:02.667 [2024-07-15 21:20:29.707980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.707986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.708306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.708312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.708370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.667 [2024-07-15 21:20:29.708377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.667 qpair failed and we were unable to recover it. 00:30:02.667 [2024-07-15 21:20:29.708530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.708537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.708855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.708861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.709061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.709068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.709386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.709393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.709715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.709722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.710068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.710074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.710399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.710406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 
00:30:02.668 [2024-07-15 21:20:29.710768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.710782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.711116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.711123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.711460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.711467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.711830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.711836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.712149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.712155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.712376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.712383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.712723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.712729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.713045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.713051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.713374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.713381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.713743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.713749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 
00:30:02.668 [2024-07-15 21:20:29.713934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.713940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.714296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.714303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.714524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.714531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.714792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.714799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.715154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.715161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.715500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.715507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.715845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.715852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.716029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.716035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.716198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.716204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 00:30:02.668 [2024-07-15 21:20:29.716263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.668 [2024-07-15 21:20:29.716271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.668 qpair failed and we were unable to recover it. 
00:30:02.668 [2024-07-15 21:20:29.716588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.668 [2024-07-15 21:20:29.716595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.668 qpair failed and we were unable to recover it.
00:30:02.668-00:30:02.674 [... the same three-line failure repeats back-to-back with only the timestamps changing: posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.", from 2024-07-15 21:20:29.716588 through 21:20:29.777655 ...]
00:30:02.674 [2024-07-15 21:20:29.778037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.778044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.778377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.778383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.778599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.778605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.778937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.778944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.779143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.779152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.779349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.779356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.779722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.779729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.780058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.780066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.780247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.780253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.780454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.780460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 
00:30:02.674 [2024-07-15 21:20:29.780807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.780813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.781017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.781023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.781272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.781279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.781469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.781475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.781795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.781802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.782026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.782033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.782368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.782376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.782626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.782633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.782979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.782986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 00:30:02.674 [2024-07-15 21:20:29.783177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.674 [2024-07-15 21:20:29.783183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.674 qpair failed and we were unable to recover it. 
00:30:02.675 [2024-07-15 21:20:29.783538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.783545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.783875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.783882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.784131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.784137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.784437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.784444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.784813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.784820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.785135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.785142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.785497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.785504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.785869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.785876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.786201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.786209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.786565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.786572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 
00:30:02.675 [2024-07-15 21:20:29.786915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.786922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.787268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.787275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.787612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.787618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.787976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.787982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.788283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.788290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.788712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.788719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.788910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.788917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.789266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.789273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.789464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.789471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.789887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.789894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 
00:30:02.675 [2024-07-15 21:20:29.790131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.790138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.790490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.790496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.790758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.790764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.791110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.791116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.791478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.791485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.791800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.791806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.792026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.792033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.792359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.792365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.792585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.792591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.792801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.792808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 
00:30:02.675 [2024-07-15 21:20:29.793106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.793113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.793484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.793491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.793829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.793835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.794162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.794168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.794520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.794526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.794598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.794605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.675 qpair failed and we were unable to recover it. 00:30:02.675 [2024-07-15 21:20:29.794811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.675 [2024-07-15 21:20:29.794819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.795148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.795155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.795584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.795591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.795664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.795670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 
00:30:02.676 [2024-07-15 21:20:29.796003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.796010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.796210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.796218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.796548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.796556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.796894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.796901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.797247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.797254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.797520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.797526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.797893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.797899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.798235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.798242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.798566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.798572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.799004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.799010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 
00:30:02.676 [2024-07-15 21:20:29.799360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.799367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.799585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.799592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.799951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.799957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.800166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.800172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.800528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.800535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.800852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.800858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.801055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.801062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.801429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.801436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.801758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.801765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.802169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.802177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 
00:30:02.676 [2024-07-15 21:20:29.802383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.802391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.802692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.802700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.803069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.803077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.803292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.803298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.803469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.803475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.803705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.803711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.803937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.803943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.804156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.804162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.804509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.804516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.804868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.804874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 
00:30:02.676 [2024-07-15 21:20:29.805021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.805026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.805369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.805376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.805723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.805729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.806054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.806060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.806391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.806398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.806652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.676 [2024-07-15 21:20:29.806658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.676 qpair failed and we were unable to recover it. 00:30:02.676 [2024-07-15 21:20:29.807007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.807017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.807359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.807366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.807578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.807585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.807944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.807950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 
00:30:02.677 [2024-07-15 21:20:29.808277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.808284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.808626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.808632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.808806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.808813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.809119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.809126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.809470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.809476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.809838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.809845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.810043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.810050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.810347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.810353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.810581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.810587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.810804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.810811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 
00:30:02.677 [2024-07-15 21:20:29.811182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.811188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.811369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.811376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.811741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.811748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.812031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.812038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.812373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.812380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.812714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.812720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.812888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.812894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.813115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.813121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.813434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.813440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.813779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.813786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 
00:30:02.677 [2024-07-15 21:20:29.814153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.814167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.814510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.814516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.814753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.814759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.815117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.815123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.815317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.815324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.815554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.815561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.815799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.815805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.815978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.677 [2024-07-15 21:20:29.815984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.677 qpair failed and we were unable to recover it. 00:30:02.677 [2024-07-15 21:20:29.816336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.816343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.816643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.816649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 21:20:29.816934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.816941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.817118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.817125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.817366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.817373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.817590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.817597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.817881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.817887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.818187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.818194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.818506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.818514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.818708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.818714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.818906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.818914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.819262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.819269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 21:20:29.819469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.819475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.819852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.819859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.820098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.820104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.820304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.820312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.820369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.820376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.820798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.820805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.821054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.821061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.821391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.821399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.821617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.821625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.821832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.821839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 21:20:29.822181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.822188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.822547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.822555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.822711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.822718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.822889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.822897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.823249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.823256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.823572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.823578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.823978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.823985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.824321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.824327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.824681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.824687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.824904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.824911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 21:20:29.825216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.825223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.678 [2024-07-15 21:20:29.825582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.678 [2024-07-15 21:20:29.825589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.678 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.825929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.825935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.826240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.826246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.826596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.826602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.826846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.826853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.827193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.827200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.827318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.827328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.827668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.827675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.827867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.827874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 21:20:29.828070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.828077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.828398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.828405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.828639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.828645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.828832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.828839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.829082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.829088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.829146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.829152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.829489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.829497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.829802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.829809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.830168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.830174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.830511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.830519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 21:20:29.830739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.830746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.830971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.830978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.831176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.831183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.831417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.831424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.831761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.831768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.831976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.831984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.832351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.832358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.832760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.832767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.832934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.832941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.833104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.833110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 21:20:29.833336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.833343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.833695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.833701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.833916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.833923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.834260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.834267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-07-15 21:20:29.834491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.679 [2024-07-15 21:20:29.834497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.834841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.834848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.835097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.835104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.835450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.835457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.835694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.835700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.835853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.835860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 21:20:29.836212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.836219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.836567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.836574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.836788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.836795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.837133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.837141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.837472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.837480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.837819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.837827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.837872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.837879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.838215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.838222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.838392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.838400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.838768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.838775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 21:20:29.838972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.838979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.839221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.839228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.839572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.839578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.839776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.839782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.839996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.840002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.840184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.840190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.840360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.840369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.840681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.840687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.841022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.841028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.841243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.841250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 21:20:29.841583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.841590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.841971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.841977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.842139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.842146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.842499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.842506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.842819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.842826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.843019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.843025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.843397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.843404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.843806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.843813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.844112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.844118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-07-15 21:20:29.844309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.680 [2024-07-15 21:20:29.844316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 21:20:29.844520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.844526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.844797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.844803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.845133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.845139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.845316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.845323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.845523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.845529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.845730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.845737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.846125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.846133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.846325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.846332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.846695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.846703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.846950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.846957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 
00:30:02.681 [2024-07-15 21:20:29.847176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.847183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.847567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.847574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.847801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.847808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.848235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.848242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.848560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.848567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.848910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.848916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.849239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.849246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.849580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.849586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.849941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.849947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.850129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.850136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 
00:30:02.681 [2024-07-15 21:20:29.850359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.850366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.850697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.850703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.850907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.850913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.851254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.851260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.851423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.851430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.851826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.851832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.852148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.852156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.852501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.852508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.852629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.852636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.852933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.852940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 
00:30:02.681 [2024-07-15 21:20:29.853285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.853291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.853487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.853493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.853682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.853690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.854001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.854008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.854242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.854249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.854615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.854621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.854950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.854956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.855318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.855324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.855673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.855679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.855931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.855938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 
00:30:02.681 [2024-07-15 21:20:29.856292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.681 [2024-07-15 21:20:29.856299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-07-15 21:20:29.856502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.856509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.856920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.856926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.857267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.857274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.857613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.857621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.857998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.858004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.858342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.858348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.858644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.858651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.858852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.858860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.859221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.859231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-07-15 21:20:29.859568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.859574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.859896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.859903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.860249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.860256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.860573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.860580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.860921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.860928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.861121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.861127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.861498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.861505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.861898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.861905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.862281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.862288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.862482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.862488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-07-15 21:20:29.862780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.862787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.862973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.862979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.863030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.863036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.863221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.863227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.863514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.863521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.863750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.863757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.864146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.864156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.864490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.864498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.864875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.864882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.865205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.865213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-07-15 21:20:29.865549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.865556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.865605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.865611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.865939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.865946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.866178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.866185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-07-15 21:20:29.866324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-07-15 21:20:29.866332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.866704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.866711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.867044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.867051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.867258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.867265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.867578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.867585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.867788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.867795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-07-15 21:20:29.868135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.868142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.868438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.868446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.868646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.868653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.868869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.868876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.869235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.869242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.869446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.869453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.869653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.869659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.869923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.869930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.870275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.870281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.870520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.870526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-07-15 21:20:29.870845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.870852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.871199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.871205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.871411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.871417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.871757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.871764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.871969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.871976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.872349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.872356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.872684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.872691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.872910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.872917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.873279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.873286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.873666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.873678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-07-15 21:20:29.874071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.874078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.874365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.874372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.874692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.874699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.874873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.874880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.875198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.875206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.875528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.875535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.875862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.875871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.876073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.876080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.876405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.876412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.876726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.876733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-07-15 21:20:29.876780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.876787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.877117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.877124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.877425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.877432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.877648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.877655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.877827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-07-15 21:20:29.877834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-07-15 21:20:29.878200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.878207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.878516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.878524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.878581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.878587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.878753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.878760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.879129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.879136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-07-15 21:20:29.879384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.879397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.879568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.879574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.879759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.879765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.880086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.880092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.880400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.880406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.880621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.880627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.880995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.881002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.881175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.881181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.881409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.881416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.881585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.881591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-07-15 21:20:29.881782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.881788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.882006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.882013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.882389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.882396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.882713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.882720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.882941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.882948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.883310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.883317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.883390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.883395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.883642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.883648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.883730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.883736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.884029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.884035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-07-15 21:20:29.884294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.884301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.884508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.884516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.884823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.884830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.885145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.885153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.885470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.885476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.885723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.885729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.886081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.886089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.886411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.886418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.886618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.886625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.886812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.886820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-07-15 21:20:29.887051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.887057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.887411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.887419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.887632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.887638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.887822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.887828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.888189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.888195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-07-15 21:20:29.888398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-07-15 21:20:29.888406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.888783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.888789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.888973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.888980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.889107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.889114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.889274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.889281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-07-15 21:20:29.889661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.889667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.889916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.889923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.890209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.890216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.890545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.890552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.890932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.890938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.891162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.891169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.891251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.891257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.891421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.891428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.891481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.891488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.891824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.891830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-07-15 21:20:29.892164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.892171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.892511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.892518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.892693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.892700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.892927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.892934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.893002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.893008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.893211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.893218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.893474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.893481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.893791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.893798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.894136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.894143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.894486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.894493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-07-15 21:20:29.894824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.894831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.895179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.895186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.895531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.895539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.895885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.895892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.896246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.896254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.896462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.896469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.896670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.896678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.896854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.896860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.897181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.897187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.897263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.897269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-07-15 21:20:29.897472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.897479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.897645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.897652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.897978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.897985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.898308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.898315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.898564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.898570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.898772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.898779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-07-15 21:20:29.899141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-07-15 21:20:29.899147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.899395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.899402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.899786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.899792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.900116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.900122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 
00:30:02.686 [2024-07-15 21:20:29.900294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.900301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.900566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.900573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.900829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.900835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.901176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.901182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.901382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.901389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.901688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.901695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.901918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.901925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.902092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.902098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.902261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.902268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.902589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.902596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 
00:30:02.686 [2024-07-15 21:20:29.902821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.902827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.903181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.903188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.903414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.903422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.903766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.903773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.904117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.904123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.904454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.904461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.904724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.904731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.905095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.905101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.905368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.905374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.905704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.905711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 
00:30:02.686 [2024-07-15 21:20:29.906087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.906093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.906443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.906450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.906803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.906810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.907014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.907020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.907071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.907079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.907405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.907413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.907709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.907717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.907946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.907952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.908181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.908187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.908551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.908558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 
00:30:02.686 [2024-07-15 21:20:29.908883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.908890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.909085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.909092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.909464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.909471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.909797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.909803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.910039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.910046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.910391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.910398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.910729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.910735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.686 [2024-07-15 21:20:29.910948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.686 [2024-07-15 21:20:29.910954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.686 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.911295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.911302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.911638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.911644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-07-15 21:20:29.912001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.912008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.912210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.912217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.912552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.912559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.912899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.912906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.913283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.913290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.913652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.913658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.913997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.914003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.914389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.914396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.914737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.914744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.914934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.914941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-07-15 21:20:29.915295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.915310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.915626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.915633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.915874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.915881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.916021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.916028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.916395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.916402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.916608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.916615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.916962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.916969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.917057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.917063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.917382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.917389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.917730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.917736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-07-15 21:20:29.917937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.917944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.918183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.918190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.918342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.918348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.918613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.918620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.918954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.918962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.919145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.919152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.919440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.919446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.919757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.919763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.919963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.919969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.920314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.920321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-07-15 21:20:29.920666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.920673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.920883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.920889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-07-15 21:20:29.921239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-07-15 21:20:29.921246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.921669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.921675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.922000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.922006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.922337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.922344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.922685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.922691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.923043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.923049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.923110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.923116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.923302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.923309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-07-15 21:20:29.923394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.923400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.923617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.923623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.923687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.923694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.923872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.923880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.924185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.924191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.924371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.924377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.924571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.924577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.924771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.924777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.925164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.925170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.925482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.925489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-07-15 21:20:29.925840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.925847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.926093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.926099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.926298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.926305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-07-15 21:20:29.926617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-07-15 21:20:29.926626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.965 [2024-07-15 21:20:29.926956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.926964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.927012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.927019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.927162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.927167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.927491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.927499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.927844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.927850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.928066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.928072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-07-15 21:20:29.928318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.928325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.928409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.928415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.928823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.928830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.929193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.929199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.929402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.929409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.929708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.929714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.930038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.930045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.930097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.930103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.930413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.930420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.930592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.930598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-07-15 21:20:29.930961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.930967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.931300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.931306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.931553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.931560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.931785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.931792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.932110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.932117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.932310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.932317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.932616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.932622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.932835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.932841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.933022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.933028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.933386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.933392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-07-15 21:20:29.933749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.933756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.934132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.934138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.934535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.934542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.934745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.934752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.934970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.934976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.935188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.935195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.935534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.935541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.935777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.935783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.935996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.936003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.936241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.936248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-07-15 21:20:29.936613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.936620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.936953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.936960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.937311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.937318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.937502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.937510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.937809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.937815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.938014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.938021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.938360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.938366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.938535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.938541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.938867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.938873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.939206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.939212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-07-15 21:20:29.939607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.939614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.939793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.939799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.940106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.940113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.940395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.940403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-07-15 21:20:29.940741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-07-15 21:20:29.940747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.940927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.940934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.941300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.941307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.941670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.941677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.941954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.941960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.942298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.942304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 
00:30:02.967 [2024-07-15 21:20:29.942672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.942678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.943047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.943053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.943404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.943411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.943739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.943746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.943960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.943967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.944168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.944175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.944443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.944450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.944813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.944820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.944990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.944996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.945339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.945346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 
00:30:02.967 [2024-07-15 21:20:29.945528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.945535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.945899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.945905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.946214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.946221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.946594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.946601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.946924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.946930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.947188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.947194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.947411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.947418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.947705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.947711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.947935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.947941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.948294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.948301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 
00:30:02.967 [2024-07-15 21:20:29.948706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.948712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.948893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.948899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.949206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.949213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.949521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.949530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.949869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.949876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.950239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.950246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.950558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.950565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.950898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.950904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.951092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.951099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.951313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.951320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 
00:30:02.967 [2024-07-15 21:20:29.951643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.951649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.951972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.951979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.952224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.952233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.952432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.952439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.952747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.952753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.967 qpair failed and we were unable to recover it. 00:30:02.967 [2024-07-15 21:20:29.952903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.967 [2024-07-15 21:20:29.952911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.953160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.953167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.953515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.953523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.953773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.953779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.954098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.954105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 
00:30:02.968 [2024-07-15 21:20:29.954438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.954445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.954767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.954774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.955101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.955107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.955454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.955461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.955814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.955821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.956006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.956012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.956181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.956187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.956588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.956595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.956826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.956833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.957189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.957195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 
00:30:02.968 [2024-07-15 21:20:29.957454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.957461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.957811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.957818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.958017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.958023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.958376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.958383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.958717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.958724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.959136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.959142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.959479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.959485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.959878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.959885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.960237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.960244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 00:30:02.968 [2024-07-15 21:20:29.960430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.968 [2024-07-15 21:20:29.960437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.968 qpair failed and we were unable to recover it. 
00:30:02.968 [2024-07-15 21:20:29.960525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.968 [2024-07-15 21:20:29.960531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420
00:30:02.968 qpair failed and we were unable to recover it.
00:30:02.973 [... the same three-line sequence -- "connect() failed, errno = 111" from posix.c:1023:posix_sock_create, "sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." -- repeats continuously from 2024-07-15 21:20:29.960 through 21:20:30.022 ...]
00:30:02.973 [2024-07-15 21:20:30.022837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-15 21:20:30.022845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-15 21:20:30.023055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-15 21:20:30.023062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-15 21:20:30.023444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-15 21:20:30.023452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.023672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.023680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.024040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.024048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.024386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.024398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.024616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.024649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.025062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.025086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.025287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.025298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.025471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.025481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-07-15 21:20:30.025859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.025867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.026320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.026327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.026591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.026597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.026937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.026945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.027263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.027271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.027762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.027769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.028095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.028102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.028347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.028353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.028717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.028724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.029078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.029084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-07-15 21:20:30.029440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.029447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.029628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.029635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.029891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.029898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.030254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.030261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.030661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.030668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.031006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.031014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.031387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.031394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.031589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.031595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.031909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.031916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.032212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.032219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-07-15 21:20:30.032414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.032422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.032640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.032648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.032933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.032940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.033308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.033317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.033661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.033668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.033863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.033871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.034235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.034243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-15 21:20:30.034357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-15 21:20:30.034365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.034584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.034591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.034789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.034797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-15 21:20:30.035038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.035045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.035387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.035394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.035718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.035725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.036028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.036034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.036373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.036380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.036727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.036734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.037082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.037089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.037159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.037166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.037483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.037491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.037810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.037818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-15 21:20:30.038179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.038185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.038405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.038412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.038776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.038783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.039109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.039116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.039469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.039476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.039840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.039847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.040223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.040239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.040415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.040424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.040796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.040803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.041032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.041039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-15 21:20:30.041293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.041301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.041648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.041656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.042019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.042026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.042393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.042401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.042781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.042789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.043003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.043010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.043380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.043388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.043448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.043455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.043786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.043793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.044118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.044126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-15 21:20:30.044448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.044456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.044823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.044830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.045188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.045195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.045426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.045435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.045779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.045786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.046158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.046165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.046483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.046495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.046838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-15 21:20:30.046844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-15 21:20:30.046925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.046931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.047252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.047266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-07-15 21:20:30.047606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.047612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.047884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.047891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.048237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.048245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.048454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.048460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.048800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.048807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.049131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.049138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.049491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.049498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.049701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.049707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.050044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.050051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.050398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.050405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-07-15 21:20:30.050816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.050823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.051150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.051157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.051482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.051489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.051835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.051842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.052026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.052032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.052182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.052189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.052500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.052507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.052840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.052847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.053070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.053077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.053468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.053476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-07-15 21:20:30.053845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.053852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.054165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.054172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.054529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.054535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.054851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.054858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.055205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.055212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.055421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.055428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.055633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.055640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.055882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.055889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.056089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.056095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.056283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.056290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-07-15 21:20:30.056655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.056661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.056848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.056855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.057123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.057130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.057339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.057348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.057686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.057693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.057941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.057948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.058176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.058183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.058537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.058545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.058757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-15 21:20:30.058765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-15 21:20:30.059114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.059121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 
00:30:02.977 [2024-07-15 21:20:30.059462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.059469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.059651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.059658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.059842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.059848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.060042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.060049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.060271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.060278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.060622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.060629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.060949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.060956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.061004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.061011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.061059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.061065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.061414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.061421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 
00:30:02.977 [2024-07-15 21:20:30.061838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.061845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.062169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.062176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.062531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.062538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.062860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.062867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.063222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.063228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.063418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.063424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.063731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.063737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.063913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.063921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.064259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.064266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.064615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.064623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 
00:30:02.977 [2024-07-15 21:20:30.065053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.065060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.065407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.065415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.065602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.065610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.065973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.065980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.066318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.066325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.066685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.066691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.066840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.066847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.067172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.067179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.067533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.067540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.067904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.067912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 
00:30:02.977 [2024-07-15 21:20:30.068272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.068280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.068630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.068637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.069011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.069017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.069347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.069354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.069678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.069685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.069932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.069947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.070300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.070307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.070651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.070658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.070833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-15 21:20:30.070840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-15 21:20:30.071070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.071076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-15 21:20:30.071435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.071441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.071606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.071613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.071838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.071846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.072079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.072086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.072401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.072409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.072750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.072756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.073082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.073088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.073425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.073433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.073644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.073651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.073993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.074000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-15 21:20:30.074372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.074378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.074689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.074697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.075049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.075055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.075403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.075411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.075599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.075606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.075959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.075967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.076165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.076172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.076361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.076369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.076719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.076726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.076972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.076978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-15 21:20:30.077211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.077219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.077570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.077577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.077792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.077798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.078156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.078163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.078282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.078289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.078716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.078722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.079071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.079077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.079427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.079434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.079728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.079736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.079926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.079933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-15 21:20:30.080225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.080235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.080478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.080485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.080841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.080848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.081091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-15 21:20:30.081098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-15 21:20:30.081268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.081275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.081601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.081608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.081955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.081962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.082286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.082294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.082601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.082608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.083010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.083017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-07-15 21:20:30.083368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.083376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.083724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.083731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.083866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.083873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.084074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.084081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.084333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.084340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.084537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.084543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.084812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.084819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.085156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.085163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.085492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.085499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.085850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.085857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-07-15 21:20:30.086154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.086161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.086412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.086420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.086739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.086746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.087079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.087086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.087438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.087446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.087698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.087705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.087935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.087943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.088176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.088183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.088527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.088543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.088905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.088916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-07-15 21:20:30.089182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.089197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.089493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.089509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.089760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.089789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.089935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.089968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.090335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.090344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.090662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.090669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.091005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.091012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.091212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.091219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.091477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.091484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.091713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.091720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-07-15 21:20:30.091918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.091925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.092282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.092289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.092335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.092342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.092689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.092696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-15 21:20:30.093038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-15 21:20:30.093045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.093318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.093325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.093665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.093671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.094029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.094036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.094364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.094371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.094547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.094554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-07-15 21:20:30.094894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.094900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.095242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.095249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.095595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.095602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.095838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.095844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.096056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.096062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.096320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.096327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.096693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.096699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.096850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.096856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.097236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.097242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.097583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.097590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-07-15 21:20:30.097935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.097942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.098325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.098332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.098639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.098646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.098858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.098865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.099218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.099226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.099479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.099486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.099810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.099817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.099968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.099974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.100329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.100336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.100566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.100573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-07-15 21:20:30.100925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.100933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.101265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.101272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.101487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.101494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.101656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.101662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.102018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.102024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.102212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.102219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.102433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.102440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.102597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.102604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.102946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.102952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.103301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.103309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-07-15 21:20:30.103667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.103674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.103918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.103924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.104119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.104126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.104398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.104404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.104629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.104636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-15 21:20:30.104980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-15 21:20:30.104987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.105234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.105241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.105555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.105561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.105760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.105766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.106055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.106061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-07-15 21:20:30.106293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.106300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.106737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.106743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.106929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.106935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.107243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.107250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.107485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.107492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.107686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.107693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.107913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.107921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.108254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.108262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.108612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.108618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.109032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.109038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-07-15 21:20:30.109337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.109344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.109684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.109691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.110038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.110045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.110395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.110402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.110587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.110594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.110969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.110976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.111342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.111349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.111709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.111716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.112061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.112068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.112397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.112404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-07-15 21:20:30.112775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.112783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.113138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.113144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.113553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.113560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.113902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.113909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.114115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.114121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.114458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.114465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.114655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.114662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.114898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.114913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.115260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.115267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.115480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.115487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-07-15 21:20:30.115737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.115745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.116086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.116093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.116334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.116341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.116589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.116595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.116897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.116903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.117220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.117238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-15 21:20:30.117475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-15 21:20:30.117482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.117707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.117715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.118094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.118100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.118443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.118458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 
00:30:02.982 [2024-07-15 21:20:30.118699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.118706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.118878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.118884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.119132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.119139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.119457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.119464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.119658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.119665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.120008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.120016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.120351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.120358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.120716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.120723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.121048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.121055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.121393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.121400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 
00:30:02.982 [2024-07-15 21:20:30.121777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.121784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.122112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.122119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.122343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.122351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.122614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.122630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.122866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.122873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.123224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.123240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.123445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.123452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.123623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.123630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.123985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.123992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.124201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.124207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 
00:30:02.982 [2024-07-15 21:20:30.124560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.124569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.124859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.124866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.125174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.125181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.125546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.125553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.125764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.125771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.125963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.125970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.126127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.126135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.126275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.126282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.126595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.126603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.126940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.126947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 
00:30:02.982 [2024-07-15 21:20:30.127270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.127277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-15 21:20:30.127625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-15 21:20:30.127631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.128003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.128010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.128222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.128233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.128592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.128599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.128815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.128822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.129187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.129194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.129375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.129382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.129565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.129572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.129929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.129935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-15 21:20:30.130276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.130283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.130650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.130657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.130878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.130885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.131044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.131050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.131109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.131115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.131425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.131432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.131622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.131629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.131957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.131963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.132071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.132077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.132414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.132421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-15 21:20:30.132773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.132780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.133126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.133133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.133483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.133490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.133668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.133675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.133988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.133995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.134200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.134207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.134392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.134398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.134674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.134681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.134844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.134851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.135119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.135127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-15 21:20:30.135348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.135358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.135749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.135756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.136094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.136102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.136454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.136461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.136843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.136850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.137001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.137008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.137346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.137353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.137689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.137695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.137899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.137905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.138284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.138290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-15 21:20:30.138645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.138653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.139002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.139009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-15 21:20:30.139274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-15 21:20:30.139281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.139546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.139553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.139845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.139851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.140169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.140176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.140519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.140526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.140739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.140746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.141098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.141104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.141436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.141443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-07-15 21:20:30.141685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.141692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.141743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.141749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.141962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.141969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.142087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.142094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.142394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.142401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.142739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.142746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.143029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.143036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.143201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.143208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.143576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.143583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.143778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.143784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-07-15 21:20:30.144151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.144158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.144471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.144478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.144735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.144742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.145088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.145095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.145453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.145460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.145803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.145809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.146001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.146008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.146273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.146279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.146513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.146519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.146848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.146854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-07-15 21:20:30.147213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.147222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd9dc000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.147541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.147576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.147829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.147841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.148214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.148226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.148446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.148458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.148849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.148859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.149217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.149238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.149641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.149654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.150015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.150027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.150243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.150253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-07-15 21:20:30.150626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.150636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.151030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.151040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.151414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.151425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-15 21:20:30.151690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-15 21:20:30.151700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.152056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.152067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.152491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.152503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.152699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.152710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.152883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.152893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.153220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.153238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.153618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.153632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-07-15 21:20:30.153935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.153946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.154330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.154341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.154587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.154597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.154975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.154985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.155202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.155212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.155504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.155516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.155710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.155719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.155785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.155800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.156124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.156134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.156581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.156592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-07-15 21:20:30.156919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.156929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.157262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.157272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.157646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.157657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.157856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.157866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.158243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.158254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.158466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.158477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.158871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.158881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.159239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.159250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.159452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.159463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.159747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.159757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-07-15 21:20:30.160099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.160109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.160483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.160495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.160911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.160921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.161219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.161245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.161695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.161705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.162066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.162076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.162304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.162314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.162728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.162739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.163068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.163079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-07-15 21:20:30.163440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-07-15 21:20:30.163451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-07-15 21:20:30.163827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.985 [2024-07-15 21:20:30.163841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:02.985 qpair failed and we were unable to recover it.
00:30:02.985 [2024-07-15 21:20:30.164174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.985 [2024-07-15 21:20:30.164188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:02.985 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats for every subsequent connection attempt, application timestamps advancing from 21:20:30.164 to 21:20:30.227 and the console prefix from 00:30:02.985 to 00:30:02.990; each attempt targets tqpair=0x240fa50 at 10.0.0.2, port=4420 and fails with errno = 111 ...]
00:30:02.990 [2024-07-15 21:20:30.227154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.990 [2024-07-15 21:20:30.227163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420
00:30:02.990 qpair failed and we were unable to recover it.
00:30:02.990 [2024-07-15 21:20:30.227577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.990 [2024-07-15 21:20:30.227586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.990 qpair failed and we were unable to recover it. 00:30:02.990 [2024-07-15 21:20:30.227788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.990 [2024-07-15 21:20:30.227797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.990 qpair failed and we were unable to recover it. 00:30:02.990 [2024-07-15 21:20:30.227986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.990 [2024-07-15 21:20:30.227996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.990 qpair failed and we were unable to recover it. 00:30:02.990 [2024-07-15 21:20:30.228339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.990 [2024-07-15 21:20:30.228349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.990 qpair failed and we were unable to recover it. 00:30:02.990 [2024-07-15 21:20:30.228704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.990 [2024-07-15 21:20:30.228713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.990 qpair failed and we were unable to recover it. 00:30:02.990 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.990 [2024-07-15 21:20:30.228910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.228921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.229159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.229169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:02.991 [2024-07-15 21:20:30.229507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.229520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 
00:30:02.991 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:02.991 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:02.991 [2024-07-15 21:20:30.229854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.229865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:02.991 [2024-07-15 21:20:30.230201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.230212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.230359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.230369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.230575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.230584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.230887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.230897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.231268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.231280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.231623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.231633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.231996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.232005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.232384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.232393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 
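Note: the interleaved harness lines here are bash xtrace output: nvmf/common.sh closes the start_nvmf_tgt timing region (timing_exit) and then suppresses further tracing via xtrace_disable / set +x. A rough sketch of that enter/exit timing pattern follows; it is an illustration of the idea only, not SPDK's actual autotest_common.sh implementation.

  # Simplified illustration of a timing_enter/timing_exit pair (names match
  # the trace above; the bodies are assumptions, not the real SPDK helpers).
  declare -A _timing_start
  timing_enter() { _timing_start[$1]=$SECONDS; }
  timing_exit()  { echo "timing: $1 took $(( SECONDS - _timing_start[$1] ))s"; }

  timing_enter start_nvmf_tgt
  # ... start the target and wait for its listener ...
  timing_exit start_nvmf_tgt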
00:30:02.991 [2024-07-15 21:20:30.232608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.232617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.233079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.233089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.233465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.233475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.233847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.233857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.234218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.234234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.234562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.234572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.234941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.234951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.235275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.235285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.235639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.235648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.235990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.235999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 
00:30:02.991 [2024-07-15 21:20:30.236304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.236315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.236655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.236664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.237005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.237015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.237380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.237390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.237642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.237653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.237866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.237875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.238227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.238244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.238648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.238658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.238994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.239004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 00:30:02.991 [2024-07-15 21:20:30.239311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.991 [2024-07-15 21:20:30.239321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.991 qpair failed and we were unable to recover it. 
00:30:02.991 [2024-07-15 21:20:30.239688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.992 [2024-07-15 21:20:30.239698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:02.992 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-15 21:20:30.240070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-15 21:20:30.240081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-15 21:20:30.240594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-15 21:20:30.240605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.240789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.240798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.241153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.241162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.241410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.241420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.241758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.241768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.242106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.242115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.242365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.242375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.242632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.242642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 
00:30:03.260 [2024-07-15 21:20:30.242846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.242856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.243177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.243188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.243548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.243564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.243928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.243937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.243995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.244004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.244316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.244325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.244682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.244691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.245014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.245022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.245365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.245376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.245577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.245587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 
00:30:03.260 [2024-07-15 21:20:30.245894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.245903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.246265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.246276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.246665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.246675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.247003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.247016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.247471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.247480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.247833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.247843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.248218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.248228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.248387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.248396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.248778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.248789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.248988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.248998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 
00:30:03.260 [2024-07-15 21:20:30.249379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.249388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.249446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.249454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.249643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.249653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.249966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.249976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.250340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.250350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.250720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.250730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.250929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.250938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.251300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.251311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.251665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.251675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.251874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.251884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 
00:30:03.260 [2024-07-15 21:20:30.252194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.252204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.252423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.252433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-15 21:20:30.252696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-15 21:20:30.252706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.253054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.253063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.253235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.253244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.253436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.253444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.253754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.253764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.254084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.254094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.254434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.254445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.254798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.254807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 
00:30:03.261 [2024-07-15 21:20:30.255201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.255212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.255566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.255575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.255792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.255802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.256051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.256060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.256394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.256404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.256780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.256790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.257141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.257151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.257333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.257343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.257652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.257661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.257917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.257927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 
00:30:03.261 [2024-07-15 21:20:30.258278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.258288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.258625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.258635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.258964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.258973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.259340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.259350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.259643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.259653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.260000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.260010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.260339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.260349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.260706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.260715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.261066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.261075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.261253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.261263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 
00:30:03.261 [2024-07-15 21:20:30.261508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.261517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.261877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.261886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.262084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.262092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.262436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.262446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.262735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.262745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.262922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.262931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.263308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.263317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.263690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.263700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.263967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.263977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.264340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.264350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 
00:30:03.261 [2024-07-15 21:20:30.264576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.264586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.264934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.264943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.265264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.265273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-15 21:20:30.265525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-15 21:20:30.265537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.265783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.265793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.265986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.265996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.266227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.266249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.266464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.266473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.266770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.266780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.267135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.267144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-07-15 21:20:30.267487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.267496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.267863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.267873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.268219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.268228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.268640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.268651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.268982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.268991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.269214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.269225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.262 [2024-07-15 21:20:30.269602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.269614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-07-15 21:20:30.269935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.269945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:03.262 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.262 [2024-07-15 21:20:30.270341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-07-15 21:20:30.270353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
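Note: amid the connect retries, target_disconnect.sh installs its cleanup trap (process_shm / nvmftestfini on SIGINT, SIGTERM, or EXIT) and creates the backing bdev for the test via rpc_cmd, which forwards to SPDK's rpc.py. Issued standalone, the same RPC would look roughly like the line below (the path to rpc.py is illustrative and depends on the local SPDK checkout):

  # Same RPC as the traced rpc_cmd call above, run directly against rpc.py.
  # 64 is the malloc bdev size in MB, 512 the block size in bytes, -b the name.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0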
00:30:03.262 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:03.262 [2024-07-15 21:20:30.270692 .. 21:20:30.288530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
00:30:03.263 Malloc0
00:30:03.263 [2024-07-15 21:20:30.288870 .. 21:20:30.289222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
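The errno = 111 in the failures above is ECONNREFUSED on Linux: the initiator is dialing 10.0.0.2:4420 before the target's listener exists, so every connect() is refused and each qpair is abandoned until the bring-up steps traced below complete. A minimal probe loop showing the same condition, purely hypothetical and not part of target_disconnect.sh (assumes bash and nc are available on the host):

  # Hypothetical check: keep probing until something accepts connections on 10.0.0.2:4420.
  # Until the nvmf listener is added, connect() keeps failing with errno 111 (ECONNREFUSED).
  until nc -z 10.0.0.2 4420; do
    sleep 0.1
  done
  echo "10.0.0.2:4420 is now accepting connections"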
00:30:03.263 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:03.263 [2024-07-15 21:20:30.289459 .. 21:20:30.289700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
00:30:03.263 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:03.263 [2024-07-15 21:20:30.290021 / 21:20:30.290031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:30:03.263 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:03.263 [2024-07-15 21:20:30.290234 / 21:20:30.290244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:30:03.263 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:03.264 [2024-07-15 21:20:30.290612 .. 21:20:30.295914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
00:30:03.264 [2024-07-15 21:20:30.296084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:03.264 [2024-07-15 21:20:30.296104 .. 21:20:30.304074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
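The nvmf_create_transport step traced above is what produces the *** TCP Transport Init *** notice from tcp.c; rpc_cmd is the autotest wrapper around SPDK's RPC client. A rough stand-alone sketch of the same call, assuming a running nvmf_tgt and the default RPC socket (the script path is illustrative, and the extra "-o" option the test passes is omitted here):

  # Equivalent direct call via SPDK's RPC client; rpc_cmd in this log wraps the same tool.
  ./scripts/rpc.py nvmf_create_transport -t TCP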
00:30:03.265 [2024-07-15 21:20:30.304386 .. 21:20:30.304764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
00:30:03.265 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:03.265 [2024-07-15 21:20:30.305087 .. 21:20:30.305468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
00:30:03.265 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:03.265 [2024-07-15 21:20:30.305556 / 21:20:30.305566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:30:03.265 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:03.265 [2024-07-15 21:20:30.305870 / 21:20:30.305880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:30:03.265 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:03.265 [2024-07-15 21:20:30.306219 .. 21:20:30.315965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
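The nvmf_create_subsystem call traced above creates the subsystem the initiator keeps dialing. A sketch of the same step as a direct RPC call; the NQN and serial number are taken from the log, the script path is illustrative:

  # Create the subsystem, allow any host to connect (-a), and set its serial number (-s).
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001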
00:30:03.266 [2024-07-15 21:20:30.316272 .. 21:20:30.316985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
00:30:03.266 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:03.266 [2024-07-15 21:20:30.317290 / 21:20:30.317300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:30:03.266 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:03.266 [2024-07-15 21:20:30.317667 / 21:20:30.317677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:30:03.266 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:03.266 [2024-07-15 21:20:30.318042 / 21:20:30.318052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:30:03.266 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:03.266 [2024-07-15 21:20:30.318389 .. 21:20:30.328596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated for each connection attempt in this interval)
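The nvmf_subsystem_add_ns step traced above attaches the Malloc0 bdev (whose name was printed earlier when it was created) to cnode1 as a namespace. A stand-alone sketch of the same call, with the script path again assumed:

  # Expose the Malloc0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0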
00:30:03.267 [2024-07-15 21:20:30.328963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.328972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.267 [2024-07-15 21:20:30.329300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.329311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.267 [2024-07-15 21:20:30.329675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.329685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.267 [2024-07-15 21:20:30.329935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.329945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.330107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:03.267 [2024-07-15 21:20:30.330117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.330494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.330504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.330764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.330773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.330986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.330996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-07-15 21:20:30.331366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.331382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.331697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.331706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.332073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.332082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.332410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.332420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.332741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.332750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.333126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.333135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.333494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.333503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.333865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.333875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.334266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.334277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.334646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.334656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-07-15 21:20:30.334981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.334991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.335165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.335174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.335495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.335505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.335708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.335717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.335910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.335919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-15 21:20:30.336136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-15 21:20:30.336146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240fa50 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
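The repeated errno = 111 failures above are Linux ECONNREFUSED: the initiator keeps retrying 10.0.0.2:4420 before any listener has been set up on the target. A minimal manual probe of the same condition, assuming only bash (its /dev/tcp pseudo-device) and coreutils timeout on the test host, might look like:

# hypothetical one-off check, not part of the test harness
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "port 4420 is accepting connections"
else
    echo "connection refused or unreachable (matches errno 111 seen above)"
fi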
00:30:03.267 [2024-07-15 21:20:30.336338] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.267 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.267 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.267 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.267 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:03.267 [2024-07-15 21:20:30.346878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.267 [2024-07-15 21:20:30.346960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.267 [2024-07-15 21:20:30.346978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.267 [2024-07-15 21:20:30.346986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.267 [2024-07-15 21:20:30.346993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.267 [2024-07-15 21:20:30.347012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.268 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.268 21:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2165584 00:30:03.268 [2024-07-15 21:20:30.356884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.356956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.356973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.356979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.356986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.357001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 
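For context, rpc_cmd in the SPDK test scripts forwards to scripts/rpc.py, so the two listener additions traced above correspond to plain RPC calls. A rough standalone equivalent, assuming an SPDK checkout and a target application already serving the default RPC socket (/var/tmp/spdk.sock), with the TCP transport and the nqn.2016-06.io.spdk:cnode1 subsystem created earlier in the run, would be:

# illustrative sketch only; the harness issues these through rpc_cmd
./scripts/rpc.py nvmf_create_transport -t tcp        # only if the tcp transport was not created earlier
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery listener, as in the trace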
00:30:03.268 [2024-07-15 21:20:30.366904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.366973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.366989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.366996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.367002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.367017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-15 21:20:30.376850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.376918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.376936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.376944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.376950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.376965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-15 21:20:30.386843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.386966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.386982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.386990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.386996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.387011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 
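The blocks that follow repeat the same failure signature: the target-side ctrlr.c rejects the I/O qpair CONNECT with "Unknown controller ID 0x1", the host-side poll of the fabric CONNECT command fails (sct 1, sc 130), and the qpair is abandoned. Purely as an illustration of the same NVMe/TCP transport path (not something this harness invokes), a manual discovery and connect attempt from an initiator host with nvme-cli installed would be:

# hypothetical manual reproduction from an initiator host
nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1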
00:30:03.268 [2024-07-15 21:20:30.396877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.396941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.396964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.396972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.396978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.396993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-15 21:20:30.406916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.406981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.406997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.407004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.407010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.407025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-15 21:20:30.416819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.416882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.416900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.416907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.416913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.416929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-07-15 21:20:30.426958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.427024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.427040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.427047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.427053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.427067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-15 21:20:30.436972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.437038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.437054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.437060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.437066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.437085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-15 21:20:30.447004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.447113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.447129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.447136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.447142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.447156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-07-15 21:20:30.457043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.457133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.457148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.457155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.457162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.457175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-15 21:20:30.467076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.268 [2024-07-15 21:20:30.467148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.268 [2024-07-15 21:20:30.467163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.268 [2024-07-15 21:20:30.467170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.268 [2024-07-15 21:20:30.467176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.268 [2024-07-15 21:20:30.467190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-15 21:20:30.477113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.269 [2024-07-15 21:20:30.477179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.269 [2024-07-15 21:20:30.477194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.269 [2024-07-15 21:20:30.477201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.269 [2024-07-15 21:20:30.477207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.269 [2024-07-15 21:20:30.477221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-07-15 21:20:30.487144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.269 [2024-07-15 21:20:30.487208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.269 [2024-07-15 21:20:30.487226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.269 [2024-07-15 21:20:30.487237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.269 [2024-07-15 21:20:30.487243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.269 [2024-07-15 21:20:30.487257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-15 21:20:30.497106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.269 [2024-07-15 21:20:30.497169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.269 [2024-07-15 21:20:30.497184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.269 [2024-07-15 21:20:30.497190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.269 [2024-07-15 21:20:30.497196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.269 [2024-07-15 21:20:30.497210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-15 21:20:30.507088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.269 [2024-07-15 21:20:30.507160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.269 [2024-07-15 21:20:30.507175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.269 [2024-07-15 21:20:30.507183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.269 [2024-07-15 21:20:30.507189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.269 [2024-07-15 21:20:30.507203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-07-15 21:20:30.517209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.269 [2024-07-15 21:20:30.517283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.269 [2024-07-15 21:20:30.517299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.269 [2024-07-15 21:20:30.517306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.269 [2024-07-15 21:20:30.517312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.269 [2024-07-15 21:20:30.517325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-15 21:20:30.527249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.269 [2024-07-15 21:20:30.527316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.269 [2024-07-15 21:20:30.527331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.269 [2024-07-15 21:20:30.527337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.269 [2024-07-15 21:20:30.527344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.269 [2024-07-15 21:20:30.527361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-15 21:20:30.537257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.269 [2024-07-15 21:20:30.537322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.269 [2024-07-15 21:20:30.537338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.269 [2024-07-15 21:20:30.537345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.269 [2024-07-15 21:20:30.537353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.269 [2024-07-15 21:20:30.537367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.532 [2024-07-15 21:20:30.547310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.532 [2024-07-15 21:20:30.547382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.532 [2024-07-15 21:20:30.547397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.532 [2024-07-15 21:20:30.547404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.532 [2024-07-15 21:20:30.547410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.532 [2024-07-15 21:20:30.547424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.532 qpair failed and we were unable to recover it. 00:30:03.532 [2024-07-15 21:20:30.557357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.532 [2024-07-15 21:20:30.557513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.532 [2024-07-15 21:20:30.557528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.532 [2024-07-15 21:20:30.557535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.532 [2024-07-15 21:20:30.557541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.532 [2024-07-15 21:20:30.557555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.532 qpair failed and we were unable to recover it. 00:30:03.532 [2024-07-15 21:20:30.567365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.532 [2024-07-15 21:20:30.567431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.532 [2024-07-15 21:20:30.567446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.532 [2024-07-15 21:20:30.567454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.567460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.567474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 
00:30:03.533 [2024-07-15 21:20:30.577360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.577421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.577443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.577451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.577457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.577470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 00:30:03.533 [2024-07-15 21:20:30.587413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.587483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.587498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.587505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.587511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.587525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 00:30:03.533 [2024-07-15 21:20:30.597432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.597500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.597515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.597522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.597528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.597541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 
00:30:03.533 [2024-07-15 21:20:30.607520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.607602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.607617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.607623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.607629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.607643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 00:30:03.533 [2024-07-15 21:20:30.617485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.617573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.617588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.617594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.617604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.617618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 00:30:03.533 [2024-07-15 21:20:30.627553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.627637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.627654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.627661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.627667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.627682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 
00:30:03.533 [2024-07-15 21:20:30.637544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.637610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.637626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.637633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.637639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.637653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 00:30:03.533 [2024-07-15 21:20:30.647611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.647671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.647686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.647693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.647699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.647713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 00:30:03.533 [2024-07-15 21:20:30.657519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.657591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.657606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.657613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.657619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.657632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 
00:30:03.533 [2024-07-15 21:20:30.667610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.667687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.667703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.533 [2024-07-15 21:20:30.667710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.533 [2024-07-15 21:20:30.667716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.533 [2024-07-15 21:20:30.667730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.533 qpair failed and we were unable to recover it. 00:30:03.533 [2024-07-15 21:20:30.677643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.533 [2024-07-15 21:20:30.677707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.533 [2024-07-15 21:20:30.677722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.677730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.677736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.677750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.687658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.687724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.687739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.687746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.687752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.687766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 
00:30:03.534 [2024-07-15 21:20:30.697687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.697756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.697771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.697778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.697784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.697797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.707719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.707791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.707806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.707813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.707823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.707836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.717746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.717812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.717827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.717834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.717840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.717853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 
00:30:03.534 [2024-07-15 21:20:30.727785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.727850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.727865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.727871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.727877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.727891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.737841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.737937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.737952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.737959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.737965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.737978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.747849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.747926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.747951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.747959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.747966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.747985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 
00:30:03.534 [2024-07-15 21:20:30.757765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.757830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.757848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.757855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.757862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.757877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.767912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.767976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.767992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.767999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.768005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.768020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.777937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.778012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.778038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.778046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.778053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.778071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 
00:30:03.534 [2024-07-15 21:20:30.787964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.788038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.788063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.788071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.788077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.788096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.797986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.798048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.798066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.798080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.798087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.798103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.534 [2024-07-15 21:20:30.808045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.808104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.808119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.808126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.808132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.808146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 
00:30:03.534 [2024-07-15 21:20:30.817930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.534 [2024-07-15 21:20:30.817996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.534 [2024-07-15 21:20:30.818011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.534 [2024-07-15 21:20:30.818018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.534 [2024-07-15 21:20:30.818024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.534 [2024-07-15 21:20:30.818037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.534 qpair failed and we were unable to recover it. 00:30:03.797 [2024-07-15 21:20:30.828070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.797 [2024-07-15 21:20:30.828139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.797 [2024-07-15 21:20:30.828154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.797 [2024-07-15 21:20:30.828161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.797 [2024-07-15 21:20:30.828167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.797 [2024-07-15 21:20:30.828181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.797 qpair failed and we were unable to recover it. 00:30:03.797 [2024-07-15 21:20:30.838176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.797 [2024-07-15 21:20:30.838243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.797 [2024-07-15 21:20:30.838259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.797 [2024-07-15 21:20:30.838266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.797 [2024-07-15 21:20:30.838272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.797 [2024-07-15 21:20:30.838286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.797 qpair failed and we were unable to recover it. 
00:30:03.797 [2024-07-15 21:20:30.848145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.797 [2024-07-15 21:20:30.848213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.797 [2024-07-15 21:20:30.848228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.797 [2024-07-15 21:20:30.848240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.797 [2024-07-15 21:20:30.848246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.797 [2024-07-15 21:20:30.848260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.797 qpair failed and we were unable to recover it. 00:30:03.797 [2024-07-15 21:20:30.858238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.797 [2024-07-15 21:20:30.858340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.797 [2024-07-15 21:20:30.858355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.797 [2024-07-15 21:20:30.858362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.797 [2024-07-15 21:20:30.858368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.797 [2024-07-15 21:20:30.858382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.797 qpair failed and we were unable to recover it. 00:30:03.797 [2024-07-15 21:20:30.868194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.797 [2024-07-15 21:20:30.868263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.797 [2024-07-15 21:20:30.868281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.797 [2024-07-15 21:20:30.868290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.797 [2024-07-15 21:20:30.868296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.797 [2024-07-15 21:20:30.868311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.797 qpair failed and we were unable to recover it. 
00:30:03.797 [2024-07-15 21:20:30.878098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.797 [2024-07-15 21:20:30.878168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.797 [2024-07-15 21:20:30.878184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.797 [2024-07-15 21:20:30.878191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.797 [2024-07-15 21:20:30.878198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.797 [2024-07-15 21:20:30.878212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.797 qpair failed and we were unable to recover it. 00:30:03.797 [2024-07-15 21:20:30.888256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.888317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.888333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.888343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.888350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.888364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:30.898324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.898388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.898403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.898410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.898416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.898430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 
00:30:03.798 [2024-07-15 21:20:30.908300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.908368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.908383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.908390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.908396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.908409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:30.918357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.918420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.918435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.918442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.918447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.918461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:30.928427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.928487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.928502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.928509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.928515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.928529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 
00:30:03.798 [2024-07-15 21:20:30.938376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.938440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.938455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.938462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.938468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.938482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:30.948325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.948423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.948438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.948445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.948451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.948465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:30.958417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.958481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.958496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.958503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.958508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.958522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 
00:30:03.798 [2024-07-15 21:20:30.968453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.968547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.968563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.968570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.968576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.968590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:30.978472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.978543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.978558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.978569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.978575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.978588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:30.988472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.988569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.988584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.988591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.988597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.988610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 
00:30:03.798 [2024-07-15 21:20:30.998460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:30.998557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:30.998573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:30.998580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:30.998586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:30.998600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:31.008467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:31.008532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:31.008547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:31.008554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:31.008560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:31.008574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:31.018492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:31.018554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:31.018570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:31.018576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:31.018583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:31.018596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 
00:30:03.798 [2024-07-15 21:20:31.028640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:31.028722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:31.028737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:31.028744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:31.028750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:31.028764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:31.038645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:31.038712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:31.038727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:31.038733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:31.038739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:31.038753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:31.048574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:31.048638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:31.048652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:31.048659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:31.048666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:31.048679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 
00:30:03.798 [2024-07-15 21:20:31.058704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:31.058801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:31.058817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:31.058824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:31.058830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:31.058844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:31.068722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:31.068788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:31.068807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:31.068815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:31.068821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:31.068834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 00:30:03.798 [2024-07-15 21:20:31.078750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.798 [2024-07-15 21:20:31.078858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.798 [2024-07-15 21:20:31.078873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.798 [2024-07-15 21:20:31.078881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.798 [2024-07-15 21:20:31.078887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:03.798 [2024-07-15 21:20:31.078900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.798 qpair failed and we were unable to recover it. 
00:30:04.061 [2024-07-15 21:20:31.088815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.088903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.088918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.088925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.088932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.088945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 00:30:04.061 [2024-07-15 21:20:31.098828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.098890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.098906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.098913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.098919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.098932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 00:30:04.061 [2024-07-15 21:20:31.108897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.108983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.108999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.109006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.109012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.109026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 
00:30:04.061 [2024-07-15 21:20:31.118801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.118860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.118875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.118882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.118888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.118901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 00:30:04.061 [2024-07-15 21:20:31.128958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.129026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.129044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.129052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.129058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.129073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 00:30:04.061 [2024-07-15 21:20:31.138918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.138990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.139015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.139023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.139030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.139048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 
00:30:04.061 [2024-07-15 21:20:31.148848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.148925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.148950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.148958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.148965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.148983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 00:30:04.061 [2024-07-15 21:20:31.158985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.159051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.159080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.159089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.159096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.159115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 00:30:04.061 [2024-07-15 21:20:31.169013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.061 [2024-07-15 21:20:31.169079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.061 [2024-07-15 21:20:31.169096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.061 [2024-07-15 21:20:31.169104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.061 [2024-07-15 21:20:31.169110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.061 [2024-07-15 21:20:31.169125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.061 qpair failed and we were unable to recover it. 
00:30:04.061 [2024-07-15 21:20:31.179035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.179103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.179119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.179126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.179132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.179146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.189057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.189125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.189141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.189147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.189153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.189167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.199080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.199141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.199156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.199163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.199169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.199186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 
00:30:04.062 [2024-07-15 21:20:31.209141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.209220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.209240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.209247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.209253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.209267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.219233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.219298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.219313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.219320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.219326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.219340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.229162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.229238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.229253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.229260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.229266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.229280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 
00:30:04.062 [2024-07-15 21:20:31.239254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.239312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.239328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.239335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.239341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.239355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.249208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.249273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.249291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.249298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.249304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.249318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.259254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.259323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.259338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.259345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.259351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.259364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 
00:30:04.062 [2024-07-15 21:20:31.269264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.269363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.269379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.269386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.269392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.269406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.279296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.279359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.279374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.279381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.279387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.279401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.289314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.289378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.289393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.289400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.289406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.289423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 
00:30:04.062 [2024-07-15 21:20:31.299382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.299446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.299462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.299469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.299475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.299489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.309371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.309438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.309453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.309459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.062 [2024-07-15 21:20:31.309466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.062 [2024-07-15 21:20:31.309479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.062 qpair failed and we were unable to recover it. 00:30:04.062 [2024-07-15 21:20:31.319290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.062 [2024-07-15 21:20:31.319408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.062 [2024-07-15 21:20:31.319423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.062 [2024-07-15 21:20:31.319430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.063 [2024-07-15 21:20:31.319436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.063 [2024-07-15 21:20:31.319450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.063 qpair failed and we were unable to recover it. 
00:30:04.063 [2024-07-15 21:20:31.329461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.063 [2024-07-15 21:20:31.329547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.063 [2024-07-15 21:20:31.329562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.063 [2024-07-15 21:20:31.329569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.063 [2024-07-15 21:20:31.329575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.063 [2024-07-15 21:20:31.329589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.063 qpair failed and we were unable to recover it. 00:30:04.063 [2024-07-15 21:20:31.339469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.063 [2024-07-15 21:20:31.339557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.063 [2024-07-15 21:20:31.339578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.063 [2024-07-15 21:20:31.339585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.063 [2024-07-15 21:20:31.339591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.063 [2024-07-15 21:20:31.339605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.063 qpair failed and we were unable to recover it. 00:30:04.063 [2024-07-15 21:20:31.349483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.063 [2024-07-15 21:20:31.349570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.063 [2024-07-15 21:20:31.349585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.063 [2024-07-15 21:20:31.349592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.063 [2024-07-15 21:20:31.349598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.063 [2024-07-15 21:20:31.349611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.063 qpair failed and we were unable to recover it. 
00:30:04.325 [2024-07-15 21:20:31.359532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.325 [2024-07-15 21:20:31.359614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.325 [2024-07-15 21:20:31.359629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.325 [2024-07-15 21:20:31.359636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.325 [2024-07-15 21:20:31.359641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.325 [2024-07-15 21:20:31.359655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.325 qpair failed and we were unable to recover it. 00:30:04.325 [2024-07-15 21:20:31.369564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.325 [2024-07-15 21:20:31.369627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.325 [2024-07-15 21:20:31.369642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.325 [2024-07-15 21:20:31.369649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.325 [2024-07-15 21:20:31.369656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.369669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.379608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.379704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.379721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.379729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.379738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.379753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 
00:30:04.326 [2024-07-15 21:20:31.389597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.389700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.389715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.389722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.389728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.389742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.399620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.399687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.399702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.399709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.399715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.399728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.409655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.409751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.409766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.409773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.409779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.409793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 
00:30:04.326 [2024-07-15 21:20:31.419721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.419823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.419838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.419845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.419851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.419864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.429674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.429749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.429764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.429771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.429777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.429791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.439734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.439796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.439811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.439818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.439824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.439837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 
00:30:04.326 [2024-07-15 21:20:31.449824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.449893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.449908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.449915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.449921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.449935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.459788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.459856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.459871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.459878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.459884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.459897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.469794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.469918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.469934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.469941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.469950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.469964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 
00:30:04.326 [2024-07-15 21:20:31.479754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.479818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.479836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.479843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.479849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.479864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.489877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.489936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.489951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.489958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.489964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.489978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.499992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.500104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.500120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.500127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.500133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.500146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 
00:30:04.326 [2024-07-15 21:20:31.509950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.326 [2024-07-15 21:20:31.510024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.326 [2024-07-15 21:20:31.510049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.326 [2024-07-15 21:20:31.510057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.326 [2024-07-15 21:20:31.510064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.326 [2024-07-15 21:20:31.510082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.326 qpair failed and we were unable to recover it. 00:30:04.326 [2024-07-15 21:20:31.519920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.519997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.520022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.520030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.520037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.520056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 00:30:04.327 [2024-07-15 21:20:31.529946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.530025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.530050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.530059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.530065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.530084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 
00:30:04.327 [2024-07-15 21:20:31.540016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.540090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.540115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.540123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.540129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.540148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 00:30:04.327 [2024-07-15 21:20:31.550130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.550206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.550223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.550233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.550240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.550255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 00:30:04.327 [2024-07-15 21:20:31.560087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.560174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.560190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.560197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.560208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.560222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 
00:30:04.327 [2024-07-15 21:20:31.570145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.570207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.570222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.570233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.570239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.570253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 00:30:04.327 [2024-07-15 21:20:31.580137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.580207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.580223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.580234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.580241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.580255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 00:30:04.327 [2024-07-15 21:20:31.590154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.590236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.590256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.590263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.590269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.590283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 
00:30:04.327 [2024-07-15 21:20:31.600082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.600154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.600171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.600179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.600185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.600200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 00:30:04.327 [2024-07-15 21:20:31.610215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.327 [2024-07-15 21:20:31.610281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.327 [2024-07-15 21:20:31.610297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.327 [2024-07-15 21:20:31.610304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.327 [2024-07-15 21:20:31.610310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.327 [2024-07-15 21:20:31.610324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.327 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.620249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.620356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.620371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.620379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.620385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.620399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-15 21:20:31.630260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.630336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.630353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.630360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.630366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.630381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.640297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.640362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.640378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.640385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.640391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.640405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.650343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.650409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.650424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.650435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.650441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.650455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-15 21:20:31.660389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.660466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.660482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.660492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.660499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.660513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.670375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.670440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.670456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.670463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.670469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.670483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.680422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.680485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.680501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.680507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.680513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.680527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-15 21:20:31.690450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.690510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.690525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.690531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.690537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.690551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.700646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.700710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.700726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.700732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.700738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.700752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.710503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.710578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.710592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.710599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.710605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.710619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-15 21:20:31.720538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.720639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.720655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.720662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.720668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.720682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.730545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.730604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.730619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.730626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.730632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.730645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.740601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.740666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.740685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.740695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.740702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.740716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-15 21:20:31.750626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.591 [2024-07-15 21:20:31.750697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.591 [2024-07-15 21:20:31.750712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.591 [2024-07-15 21:20:31.750719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.591 [2024-07-15 21:20:31.750725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.591 [2024-07-15 21:20:31.750739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-15 21:20:31.760651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.760708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.760724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.760731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.760737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.760752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-15 21:20:31.770681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.770740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.770756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.770763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.770769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.770783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-15 21:20:31.780741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.780805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.780820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.780827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.780833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.780847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-15 21:20:31.790650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.790751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.790766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.790773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.790780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.790794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-15 21:20:31.800741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.800805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.800821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.800828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.800834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.800847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-15 21:20:31.810806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.810869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.810884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.810891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.810897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.810911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-15 21:20:31.820809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.820877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.820892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.820899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.820905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.820918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-15 21:20:31.830853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.830922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.830941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.830950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.830956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.830970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-15 21:20:31.840879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.840985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.841001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.841007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.841013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.841027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-15 21:20:31.850987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.851054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.851069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.851076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.851082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.851096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-15 21:20:31.860947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.861013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.861029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.861035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.861041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.861055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-15 21:20:31.870955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.592 [2024-07-15 21:20:31.871023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.592 [2024-07-15 21:20:31.871038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.592 [2024-07-15 21:20:31.871045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.592 [2024-07-15 21:20:31.871051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.592 [2024-07-15 21:20:31.871065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:31.880989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.881055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.881073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.881080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.881086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.881101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:31.891016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.891078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.891093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.891100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.891106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.891121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 
00:30:04.855 [2024-07-15 21:20:31.901050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.901113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.901129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.901136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.901142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.901155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:31.911077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.911147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.911162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.911169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.911175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.911188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:31.921093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.921157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.921177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.921184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.921190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.921203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 
00:30:04.855 [2024-07-15 21:20:31.931122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.931186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.931201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.931208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.931214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.931228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:31.941156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.941226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.941245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.941252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.941258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.941272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:31.951220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.951323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.951339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.951346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.951352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.951366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 
00:30:04.855 [2024-07-15 21:20:31.961193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.961259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.961275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.961282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.961288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.961306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:31.971250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.971310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.971326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.971333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.971339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.971353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:31.981325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.981409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.981425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.981432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.981437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.981451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 
00:30:04.855 [2024-07-15 21:20:31.991300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:31.991369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:31.991384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:31.991391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:31.991397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:31.991411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.855 [2024-07-15 21:20:32.001317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.855 [2024-07-15 21:20:32.001382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.855 [2024-07-15 21:20:32.001397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.855 [2024-07-15 21:20:32.001405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.855 [2024-07-15 21:20:32.001411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.855 [2024-07-15 21:20:32.001425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.855 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.011335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.011403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.011422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.011429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.011435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.011449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 
00:30:04.856 [2024-07-15 21:20:32.021370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.021436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.021452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.021459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.021465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.021479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.031478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.031547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.031562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.031569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.031575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.031589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.041432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.041496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.041511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.041518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.041524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.041537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 
00:30:04.856 [2024-07-15 21:20:32.051357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.051421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.051436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.051443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.051449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.051466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.061526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.061587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.061603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.061609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.061615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.061629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.071547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.071628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.071643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.071650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.071656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.071670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 
00:30:04.856 [2024-07-15 21:20:32.081577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.081643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.081659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.081666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.081672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.081685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.091589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.091652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.091667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.091673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.091679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.091693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.101618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.101684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.101707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.101714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.101720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.101733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 
00:30:04.856 [2024-07-15 21:20:32.111643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.111712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.111727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.111734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.111740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.111754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.121678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.121738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.121754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.121761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.121767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.121780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:04.856 [2024-07-15 21:20:32.131701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.131758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.131775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.131783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.131789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.131804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 
00:30:04.856 [2024-07-15 21:20:32.141732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.856 [2024-07-15 21:20:32.141799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.856 [2024-07-15 21:20:32.141814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.856 [2024-07-15 21:20:32.141821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.856 [2024-07-15 21:20:32.141831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:04.856 [2024-07-15 21:20:32.141845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.856 qpair failed and we were unable to recover it. 00:30:05.120 [2024-07-15 21:20:32.151742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.151812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.151828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.151835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.151841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.151856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 00:30:05.120 [2024-07-15 21:20:32.161798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.161894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.161910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.161916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.161922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.161936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 
00:30:05.120 [2024-07-15 21:20:32.171749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.171837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.171853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.171860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.171866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.171881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 00:30:05.120 [2024-07-15 21:20:32.181820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.181885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.181901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.181907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.181913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.181927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 00:30:05.120 [2024-07-15 21:20:32.191889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.191966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.191992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.192000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.192006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.192025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 
00:30:05.120 [2024-07-15 21:20:32.201878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.201945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.201970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.201979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.201986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.202004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 00:30:05.120 [2024-07-15 21:20:32.211906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.211976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.212000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.212008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.212015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.212034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 00:30:05.120 [2024-07-15 21:20:32.221941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.222014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.222039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.222048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.222054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.222072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 
00:30:05.120 [2024-07-15 21:20:32.231986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.232097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.232114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.232122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.232132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.232148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 00:30:05.120 [2024-07-15 21:20:32.241886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.241945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.120 [2024-07-15 21:20:32.241962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.120 [2024-07-15 21:20:32.241969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.120 [2024-07-15 21:20:32.241975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.120 [2024-07-15 21:20:32.241988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.120 qpair failed and we were unable to recover it. 00:30:05.120 [2024-07-15 21:20:32.252078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.120 [2024-07-15 21:20:32.252137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.252153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.252160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.252166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.252180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 
00:30:05.121 [2024-07-15 21:20:32.262058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.262121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.262136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.262143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.262149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.262164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.272104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.272178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.272194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.272201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.272207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.272221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.282127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.282192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.282208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.282215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.282221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.282238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 
00:30:05.121 [2024-07-15 21:20:32.292030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.292097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.292112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.292119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.292125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.292139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.302208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.302277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.302293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.302300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.302306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.302320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.312189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.312262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.312277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.312284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.312290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.312304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 
00:30:05.121 [2024-07-15 21:20:32.322218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.322282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.322297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.322304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.322314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.322327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.332243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.332308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.332323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.332330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.332336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.332350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.342289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.342369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.342385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.342392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.342398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.342412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 
00:30:05.121 [2024-07-15 21:20:32.352289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.352354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.352370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.352376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.352383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.352396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.362323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.362412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.362427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.362435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.362441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.362454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.372368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.372436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.372452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.372459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.372465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.372479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 
00:30:05.121 [2024-07-15 21:20:32.382296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.382363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.382380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.382387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.382393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.121 [2024-07-15 21:20:32.382408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.121 qpair failed and we were unable to recover it. 00:30:05.121 [2024-07-15 21:20:32.392367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.121 [2024-07-15 21:20:32.392439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.121 [2024-07-15 21:20:32.392455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.121 [2024-07-15 21:20:32.392462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.121 [2024-07-15 21:20:32.392468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.122 [2024-07-15 21:20:32.392483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.122 qpair failed and we were unable to recover it. 00:30:05.122 [2024-07-15 21:20:32.402472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.122 [2024-07-15 21:20:32.402533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.122 [2024-07-15 21:20:32.402548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.122 [2024-07-15 21:20:32.402555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.122 [2024-07-15 21:20:32.402562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.122 [2024-07-15 21:20:32.402575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.122 qpair failed and we were unable to recover it. 
00:30:05.385 [2024-07-15 21:20:32.412487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.412551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.412566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.412577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.385 [2024-07-15 21:20:32.412583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.385 [2024-07-15 21:20:32.412597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.385 qpair failed and we were unable to recover it. 00:30:05.385 [2024-07-15 21:20:32.422456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.422541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.422556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.422563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.385 [2024-07-15 21:20:32.422569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.385 [2024-07-15 21:20:32.422583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.385 qpair failed and we were unable to recover it. 00:30:05.385 [2024-07-15 21:20:32.432566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.432637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.432652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.432659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.385 [2024-07-15 21:20:32.432665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.385 [2024-07-15 21:20:32.432679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.385 qpair failed and we were unable to recover it. 
00:30:05.385 [2024-07-15 21:20:32.442536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.442648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.442663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.442669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.385 [2024-07-15 21:20:32.442675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.385 [2024-07-15 21:20:32.442689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.385 qpair failed and we were unable to recover it. 00:30:05.385 [2024-07-15 21:20:32.452599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.452659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.452675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.452681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.385 [2024-07-15 21:20:32.452687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.385 [2024-07-15 21:20:32.452701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.385 qpair failed and we were unable to recover it. 00:30:05.385 [2024-07-15 21:20:32.462627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.462687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.462703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.462710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.385 [2024-07-15 21:20:32.462715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.385 [2024-07-15 21:20:32.462729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.385 qpair failed and we were unable to recover it. 
00:30:05.385 [2024-07-15 21:20:32.472600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.472672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.472687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.472694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.385 [2024-07-15 21:20:32.472700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.385 [2024-07-15 21:20:32.472714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.385 qpair failed and we were unable to recover it. 00:30:05.385 [2024-07-15 21:20:32.482628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.482693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.482718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.482727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.385 [2024-07-15 21:20:32.482733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.385 [2024-07-15 21:20:32.482746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.385 qpair failed and we were unable to recover it. 00:30:05.385 [2024-07-15 21:20:32.492725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.385 [2024-07-15 21:20:32.492785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.385 [2024-07-15 21:20:32.492800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.385 [2024-07-15 21:20:32.492807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.492813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.492827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 
00:30:05.386 [2024-07-15 21:20:32.502746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.502810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.502825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.502836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.502842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.502855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 00:30:05.386 [2024-07-15 21:20:32.512760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.512824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.512839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.512846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.512852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.512866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 00:30:05.386 [2024-07-15 21:20:32.522824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.522885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.522901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.522908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.522914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.522927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 
00:30:05.386 [2024-07-15 21:20:32.532814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.532877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.532893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.532900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.532906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.532919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 00:30:05.386 [2024-07-15 21:20:32.542837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.542902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.542918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.542925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.542931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.542945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 00:30:05.386 [2024-07-15 21:20:32.552861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.552926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.552942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.552949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.552955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.552968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 
00:30:05.386 [2024-07-15 21:20:32.562903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.562978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.563003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.563012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.563018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.563037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 00:30:05.386 [2024-07-15 21:20:32.572937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.573003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.573028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.573036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.573043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.573061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 00:30:05.386 [2024-07-15 21:20:32.582956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.583022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.583038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.583045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.583052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.583066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 
00:30:05.386 [2024-07-15 21:20:32.592960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.593030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.593045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.593057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.593063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.593077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 00:30:05.386 [2024-07-15 21:20:32.603001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.603079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.603095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.603102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.603108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.386 [2024-07-15 21:20:32.603121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.386 qpair failed and we were unable to recover it. 00:30:05.386 [2024-07-15 21:20:32.613036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.386 [2024-07-15 21:20:32.613096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.386 [2024-07-15 21:20:32.613111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.386 [2024-07-15 21:20:32.613118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.386 [2024-07-15 21:20:32.613124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.387 [2024-07-15 21:20:32.613138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.387 qpair failed and we were unable to recover it. 
00:30:05.387 [2024-07-15 21:20:32.623055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.387 [2024-07-15 21:20:32.623124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.387 [2024-07-15 21:20:32.623139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.387 [2024-07-15 21:20:32.623146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.387 [2024-07-15 21:20:32.623152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.387 [2024-07-15 21:20:32.623165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.387 qpair failed and we were unable to recover it. 00:30:05.387 [2024-07-15 21:20:32.633373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.387 [2024-07-15 21:20:32.633453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.387 [2024-07-15 21:20:32.633470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.387 [2024-07-15 21:20:32.633477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.387 [2024-07-15 21:20:32.633483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.387 [2024-07-15 21:20:32.633498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.387 qpair failed and we were unable to recover it. 00:30:05.387 [2024-07-15 21:20:32.643047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.387 [2024-07-15 21:20:32.643113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.387 [2024-07-15 21:20:32.643129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.387 [2024-07-15 21:20:32.643135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.387 [2024-07-15 21:20:32.643142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.387 [2024-07-15 21:20:32.643156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.387 qpair failed and we were unable to recover it. 
00:30:05.387 [2024-07-15 21:20:32.653214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.387 [2024-07-15 21:20:32.653347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.387 [2024-07-15 21:20:32.653363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.387 [2024-07-15 21:20:32.653370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.387 [2024-07-15 21:20:32.653376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.387 [2024-07-15 21:20:32.653390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.387 qpair failed and we were unable to recover it. 00:30:05.387 [2024-07-15 21:20:32.663237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.387 [2024-07-15 21:20:32.663305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.387 [2024-07-15 21:20:32.663320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.387 [2024-07-15 21:20:32.663326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.387 [2024-07-15 21:20:32.663333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.387 [2024-07-15 21:20:32.663346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.387 qpair failed and we were unable to recover it. 00:30:05.387 [2024-07-15 21:20:32.673191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.387 [2024-07-15 21:20:32.673312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.387 [2024-07-15 21:20:32.673328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.387 [2024-07-15 21:20:32.673335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.387 [2024-07-15 21:20:32.673341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.387 [2024-07-15 21:20:32.673355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.387 qpair failed and we were unable to recover it. 
00:30:05.650 [2024-07-15 21:20:32.683228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.650 [2024-07-15 21:20:32.683299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.650 [2024-07-15 21:20:32.683317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.650 [2024-07-15 21:20:32.683324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.650 [2024-07-15 21:20:32.683330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.650 [2024-07-15 21:20:32.683344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.650 qpair failed and we were unable to recover it. 00:30:05.650 [2024-07-15 21:20:32.693251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.650 [2024-07-15 21:20:32.693316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.650 [2024-07-15 21:20:32.693331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.650 [2024-07-15 21:20:32.693338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.650 [2024-07-15 21:20:32.693344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.650 [2024-07-15 21:20:32.693358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.650 qpair failed and we were unable to recover it. 00:30:05.650 [2024-07-15 21:20:32.703285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.650 [2024-07-15 21:20:32.703351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.650 [2024-07-15 21:20:32.703366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.650 [2024-07-15 21:20:32.703373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.650 [2024-07-15 21:20:32.703379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.650 [2024-07-15 21:20:32.703393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.650 qpair failed and we were unable to recover it. 
00:30:05.650 [2024-07-15 21:20:32.713329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.650 [2024-07-15 21:20:32.713402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.650 [2024-07-15 21:20:32.713418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.650 [2024-07-15 21:20:32.713424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.650 [2024-07-15 21:20:32.713431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.650 [2024-07-15 21:20:32.713444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.650 qpair failed and we were unable to recover it. 00:30:05.650 [2024-07-15 21:20:32.723329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.650 [2024-07-15 21:20:32.723392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.650 [2024-07-15 21:20:32.723407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.650 [2024-07-15 21:20:32.723413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.650 [2024-07-15 21:20:32.723420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.650 [2024-07-15 21:20:32.723436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.650 qpair failed and we were unable to recover it. 00:30:05.650 [2024-07-15 21:20:32.733580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.650 [2024-07-15 21:20:32.733641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.650 [2024-07-15 21:20:32.733657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.650 [2024-07-15 21:20:32.733664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.650 [2024-07-15 21:20:32.733670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.650 [2024-07-15 21:20:32.733683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.650 qpair failed and we were unable to recover it. 
00:30:05.651 [2024-07-15 21:20:32.743398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.743460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.743475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.743482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.743488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.743502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.753481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.753566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.753581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.753588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.753594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.753607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.763463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.763526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.763541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.763548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.763553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.763567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 
00:30:05.651 [2024-07-15 21:20:32.773474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.773536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.773554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.773562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.773567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.773581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.783521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.783583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.783599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.783606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.783612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.783625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.793542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.793633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.793647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.793654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.793660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.793673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 
00:30:05.651 [2024-07-15 21:20:32.803571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.803632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.803648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.803654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.803660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.803674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.813529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.813628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.813643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.813650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.813656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.813673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.823638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.823702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.823717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.823723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.823729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.823743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 
00:30:05.651 [2024-07-15 21:20:32.833651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.833726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.833742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.833748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.833755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.833768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.843701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.843764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.843779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.843786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.843792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.843806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.853705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.853769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.853784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.853791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.853797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.853810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 
00:30:05.651 [2024-07-15 21:20:32.863726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.863790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.863812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.863819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.863825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.863839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.873742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.873806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.873822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.651 [2024-07-15 21:20:32.873829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.651 [2024-07-15 21:20:32.873836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.651 [2024-07-15 21:20:32.873849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.651 qpair failed and we were unable to recover it. 00:30:05.651 [2024-07-15 21:20:32.883666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.651 [2024-07-15 21:20:32.883725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.651 [2024-07-15 21:20:32.883741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.652 [2024-07-15 21:20:32.883748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.652 [2024-07-15 21:20:32.883755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.652 [2024-07-15 21:20:32.883770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.652 qpair failed and we were unable to recover it. 
00:30:05.652 [2024-07-15 21:20:32.893784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.652 [2024-07-15 21:20:32.893839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.652 [2024-07-15 21:20:32.893856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.652 [2024-07-15 21:20:32.893862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.652 [2024-07-15 21:20:32.893868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.652 [2024-07-15 21:20:32.893882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.652 qpair failed and we were unable to recover it. 00:30:05.652 [2024-07-15 21:20:32.903838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.652 [2024-07-15 21:20:32.903923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.652 [2024-07-15 21:20:32.903938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.652 [2024-07-15 21:20:32.903945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.652 [2024-07-15 21:20:32.903951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.652 [2024-07-15 21:20:32.903968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.652 qpair failed and we were unable to recover it. 00:30:05.652 [2024-07-15 21:20:32.913852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.652 [2024-07-15 21:20:32.913925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.652 [2024-07-15 21:20:32.913950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.652 [2024-07-15 21:20:32.913959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.652 [2024-07-15 21:20:32.913965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.652 [2024-07-15 21:20:32.913984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.652 qpair failed and we were unable to recover it. 
00:30:05.652 [2024-07-15 21:20:32.923919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.652 [2024-07-15 21:20:32.923988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.652 [2024-07-15 21:20:32.924013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.652 [2024-07-15 21:20:32.924021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.652 [2024-07-15 21:20:32.924028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.652 [2024-07-15 21:20:32.924046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.652 qpair failed and we were unable to recover it. 00:30:05.652 [2024-07-15 21:20:32.933782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.652 [2024-07-15 21:20:32.933850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.652 [2024-07-15 21:20:32.933875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.652 [2024-07-15 21:20:32.933883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.652 [2024-07-15 21:20:32.933890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.652 [2024-07-15 21:20:32.933908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.652 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-15 21:20:32.943970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.915 [2024-07-15 21:20:32.944081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.915 [2024-07-15 21:20:32.944106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.915 [2024-07-15 21:20:32.944115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.915 [2024-07-15 21:20:32.944122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.915 [2024-07-15 21:20:32.944140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.915 qpair failed and we were unable to recover it. 
00:30:05.915 [2024-07-15 21:20:32.953964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.915 [2024-07-15 21:20:32.954055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.915 [2024-07-15 21:20:32.954076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.915 [2024-07-15 21:20:32.954084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.915 [2024-07-15 21:20:32.954090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.915 [2024-07-15 21:20:32.954105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-15 21:20:32.964003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.915 [2024-07-15 21:20:32.964066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.915 [2024-07-15 21:20:32.964082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.915 [2024-07-15 21:20:32.964089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.915 [2024-07-15 21:20:32.964095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.915 [2024-07-15 21:20:32.964108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-15 21:20:32.974008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.915 [2024-07-15 21:20:32.974070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.915 [2024-07-15 21:20:32.974086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.915 [2024-07-15 21:20:32.974093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.915 [2024-07-15 21:20:32.974099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.915 [2024-07-15 21:20:32.974113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.915 qpair failed and we were unable to recover it. 
00:30:05.915 [2024-07-15 21:20:32.984067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.915 [2024-07-15 21:20:32.984130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.915 [2024-07-15 21:20:32.984145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.915 [2024-07-15 21:20:32.984152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.915 [2024-07-15 21:20:32.984158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.915 [2024-07-15 21:20:32.984172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-15 21:20:32.994101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.915 [2024-07-15 21:20:32.994224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.915 [2024-07-15 21:20:32.994243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.915 [2024-07-15 21:20:32.994250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.915 [2024-07-15 21:20:32.994260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.915 [2024-07-15 21:20:32.994274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-15 21:20:33.004132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.915 [2024-07-15 21:20:33.004193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.915 [2024-07-15 21:20:33.004209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.915 [2024-07-15 21:20:33.004215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.915 [2024-07-15 21:20:33.004221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.915 [2024-07-15 21:20:33.004238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.915 qpair failed and we were unable to recover it. 
00:30:05.915 [2024-07-15 21:20:33.014170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.915 [2024-07-15 21:20:33.014227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.915 [2024-07-15 21:20:33.014248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.915 [2024-07-15 21:20:33.014254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.915 [2024-07-15 21:20:33.014261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.915 [2024-07-15 21:20:33.014274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.024083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.024151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.024166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.024173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.024179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.024192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.034216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.034290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.034306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.034313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.034319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.034332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-15 21:20:33.044264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.044330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.044345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.044352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.044358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.044372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.054131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.054187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.054202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.054208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.054215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.054228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.064310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.064402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.064417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.064424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.064431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.064444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-15 21:20:33.074344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.074411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.074427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.074434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.074440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.074453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.084395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.084459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.084474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.084481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.084491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.084505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.094358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.094416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.094431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.094438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.094444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.094458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-15 21:20:33.104416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.104493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.104509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.104516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.104522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.104536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.114332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.114396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.114412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.114418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.114424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.114438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.124484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.124547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.124562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.124569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.124575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.124588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-15 21:20:33.134524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.134593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.134610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.134617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.134623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.134637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.144535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.144600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.144615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.144622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.144628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.144642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-15 21:20:33.154532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.154604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.154621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.916 [2024-07-15 21:20:33.154628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.916 [2024-07-15 21:20:33.154634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.916 [2024-07-15 21:20:33.154647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-15 21:20:33.164593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.916 [2024-07-15 21:20:33.164656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.916 [2024-07-15 21:20:33.164672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.917 [2024-07-15 21:20:33.164678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.917 [2024-07-15 21:20:33.164684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.917 [2024-07-15 21:20:33.164698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-15 21:20:33.174564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.917 [2024-07-15 21:20:33.174626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.917 [2024-07-15 21:20:33.174642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.917 [2024-07-15 21:20:33.174653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.917 [2024-07-15 21:20:33.174659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.917 [2024-07-15 21:20:33.174673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-15 21:20:33.184637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.917 [2024-07-15 21:20:33.184702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.917 [2024-07-15 21:20:33.184717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.917 [2024-07-15 21:20:33.184724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.917 [2024-07-15 21:20:33.184730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.917 [2024-07-15 21:20:33.184743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.917 qpair failed and we were unable to recover it. 
00:30:05.917 [2024-07-15 21:20:33.194694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.917 [2024-07-15 21:20:33.194804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.917 [2024-07-15 21:20:33.194819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.917 [2024-07-15 21:20:33.194826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.917 [2024-07-15 21:20:33.194832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:05.917 [2024-07-15 21:20:33.194845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.917 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.204694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.204761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.204777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.204784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.204790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.204803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.214665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.214724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.214739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.214746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.214752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.214766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 
00:30:06.178 [2024-07-15 21:20:33.224816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.224927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.224943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.224950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.224956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.224969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.234694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.234760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.234775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.234782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.234788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.234802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.244741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.244799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.244814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.244820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.244826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.244840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 
00:30:06.178 [2024-07-15 21:20:33.254784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.254839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.254855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.254862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.254868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.254882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.264936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.265001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.265016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.265026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.265032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.265046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.274773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.274847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.274872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.274880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.274887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.274905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 
00:30:06.178 [2024-07-15 21:20:33.284841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.284922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.284947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.284955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.284962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.284980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.294908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.294972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.294997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.295005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.295011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.295029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.305011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.305088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.305112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.305121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.305127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.305146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 
00:30:06.178 [2024-07-15 21:20:33.315025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.315098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.315115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.315122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.315128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.315143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.324955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.325022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.325038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.325045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.325051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.325064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.334993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.335140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.335155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.335162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.335168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.335181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 
00:30:06.178 [2024-07-15 21:20:33.345067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.345132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.345147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.345154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.345160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.345174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.354979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.355056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.355071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.355082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.355088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.355102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 00:30:06.178 [2024-07-15 21:20:33.365092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.178 [2024-07-15 21:20:33.365170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.178 [2024-07-15 21:20:33.365185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.178 [2024-07-15 21:20:33.365192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.178 [2024-07-15 21:20:33.365198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.178 [2024-07-15 21:20:33.365212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.178 qpair failed and we were unable to recover it. 
00:30:06.178 [2024-07-15 21:20:33.374986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.375044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.375060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.375067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.375074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.375088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 00:30:06.179 [2024-07-15 21:20:33.385065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.385130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.385148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.385155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.385161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.385176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 00:30:06.179 [2024-07-15 21:20:33.395171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.395244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.395260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.395267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.395273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.395287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 
00:30:06.179 [2024-07-15 21:20:33.405166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.405232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.405247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.405254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.405261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.405274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 00:30:06.179 [2024-07-15 21:20:33.415116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.415172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.415187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.415193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.415200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.415213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 00:30:06.179 [2024-07-15 21:20:33.425275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.425340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.425355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.425362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.425368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.425382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 
00:30:06.179 [2024-07-15 21:20:33.435302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.435368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.435383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.435390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.435397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.435410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 00:30:06.179 [2024-07-15 21:20:33.445248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.445304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.445322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.445329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.445335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.445349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 00:30:06.179 [2024-07-15 21:20:33.455343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.455401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.455416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.455423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.455429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.455443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 
00:30:06.179 [2024-07-15 21:20:33.465411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.179 [2024-07-15 21:20:33.465474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.179 [2024-07-15 21:20:33.465489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.179 [2024-07-15 21:20:33.465496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.179 [2024-07-15 21:20:33.465502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.179 [2024-07-15 21:20:33.465515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.179 qpair failed and we were unable to recover it. 00:30:06.440 [2024-07-15 21:20:33.475469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.440 [2024-07-15 21:20:33.475584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.440 [2024-07-15 21:20:33.475600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.440 [2024-07-15 21:20:33.475607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.440 [2024-07-15 21:20:33.475613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.440 [2024-07-15 21:20:33.475627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.440 qpair failed and we were unable to recover it. 00:30:06.440 [2024-07-15 21:20:33.485403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.440 [2024-07-15 21:20:33.485470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.440 [2024-07-15 21:20:33.485485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.440 [2024-07-15 21:20:33.485492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.440 [2024-07-15 21:20:33.485498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.440 [2024-07-15 21:20:33.485512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.440 qpair failed and we were unable to recover it. 
00:30:06.440 [2024-07-15 21:20:33.495471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.440 [2024-07-15 21:20:33.495569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.495584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.495591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.495597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.495611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.505538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.505622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.505637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.505644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.505650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.505664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.515558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.515624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.515639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.515646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.515652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.515665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 
00:30:06.441 [2024-07-15 21:20:33.525534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.525629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.525644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.525651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.525657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.525671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.535581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.535636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.535655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.535662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.535668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.535681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.545665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.545754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.545769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.545776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.545782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.545796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 
00:30:06.441 [2024-07-15 21:20:33.555533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.555605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.555620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.555628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.555635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.555649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.565658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.565719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.565734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.565741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.565747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.565760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.575692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.575790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.575806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.575813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.575819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.575837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 
00:30:06.441 [2024-07-15 21:20:33.585768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.585836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.585851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.585858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.585865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.585878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.595714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.595779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.595795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.595802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.595808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.595822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.605761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.605819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.605834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.605841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.605847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.605860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 
00:30:06.441 [2024-07-15 21:20:33.615775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.615880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.615895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.615902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.615909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.615922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.625845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.625907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.625929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.625936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.625942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.625955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.635845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.635918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.635944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.635952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.635958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.635976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 
00:30:06.441 [2024-07-15 21:20:33.645749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.645814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.645831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.645838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.645844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.645859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.655882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.655953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.655969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.655976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.655982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.655996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.665954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.666052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.666077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.666086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.666092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.666115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 
00:30:06.441 [2024-07-15 21:20:33.675938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.676014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.676040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.676048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.676055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.676073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.685976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.686040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.686065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.686073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.686079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.686098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.695986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.696047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.696064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.696072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.696078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.696093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 
00:30:06.441 [2024-07-15 21:20:33.705952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.706026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.706042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.706048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.706055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.706070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.441 qpair failed and we were unable to recover it. 00:30:06.441 [2024-07-15 21:20:33.716067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.441 [2024-07-15 21:20:33.716127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.441 [2024-07-15 21:20:33.716147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.441 [2024-07-15 21:20:33.716154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.441 [2024-07-15 21:20:33.716160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.441 [2024-07-15 21:20:33.716174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.442 qpair failed and we were unable to recover it. 00:30:06.442 [2024-07-15 21:20:33.726112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.442 [2024-07-15 21:20:33.726221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.442 [2024-07-15 21:20:33.726240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.442 [2024-07-15 21:20:33.726247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.442 [2024-07-15 21:20:33.726253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.442 [2024-07-15 21:20:33.726268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.442 qpair failed and we were unable to recover it. 
00:30:06.703 [2024-07-15 21:20:33.736104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.736212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.736227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.736239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.736245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.736259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-15 21:20:33.746156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.746226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.746244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.746251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.746258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.746272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-15 21:20:33.756163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.756228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.756247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.756254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.756264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.756280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 
00:30:06.703 [2024-07-15 21:20:33.766086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.766146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.766162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.766169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.766175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.766190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-15 21:20:33.776240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.776341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.776358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.776364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.776371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.776385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-15 21:20:33.786347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.786447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.786462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.786470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.786476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.786490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 
00:30:06.703 [2024-07-15 21:20:33.796279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.796341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.796356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.796363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.796369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.796382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-15 21:20:33.806302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.806385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.806400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.806407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.806413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.806427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-15 21:20:33.816224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.816283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.816299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.816306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.816312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.816326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 
00:30:06.703 [2024-07-15 21:20:33.826402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.826464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.826479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.826486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.826493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.826507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-15 21:20:33.836412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.836494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.836510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.703 [2024-07-15 21:20:33.836516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.703 [2024-07-15 21:20:33.836522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.703 [2024-07-15 21:20:33.836536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-15 21:20:33.846386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.703 [2024-07-15 21:20:33.846447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.703 [2024-07-15 21:20:33.846462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.846469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.846479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.846492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-15 21:20:33.856496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.856570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.856586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.856593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.856599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.856612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.866486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.866546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.866561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.866568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.866575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.866589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.876513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.876576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.876592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.876599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.876605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.876618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-15 21:20:33.886526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.886580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.886597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.886605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.886611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.886626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.896547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.896612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.896628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.896635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.896641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.896655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.906673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.906790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.906805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.906812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.906818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.906832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-15 21:20:33.916496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.916580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.916596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.916603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.916609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.916623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.926587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.926649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.926664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.926671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.926677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.926691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.936656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.936711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.936726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.936733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.936743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.936757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-15 21:20:33.946741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.946807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.946823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.946830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.946837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.946850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.956709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.956774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.956790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.956797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.956803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.956817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.966651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.966707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.966722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.966729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.966735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.966748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-15 21:20:33.976661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.976766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.976783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.976790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.976797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.976811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-15 21:20:33.986845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.704 [2024-07-15 21:20:33.986909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.704 [2024-07-15 21:20:33.986924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.704 [2024-07-15 21:20:33.986931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.704 [2024-07-15 21:20:33.986937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.704 [2024-07-15 21:20:33.986952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.966 [2024-07-15 21:20:33.996729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.966 [2024-07-15 21:20:33.996797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.966 [2024-07-15 21:20:33.996813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.966 [2024-07-15 21:20:33.996820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.966 [2024-07-15 21:20:33.996826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.966 [2024-07-15 21:20:33.996840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.966 qpair failed and we were unable to recover it. 
00:30:06.966 [2024-07-15 21:20:34.006853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.966 [2024-07-15 21:20:34.006913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.966 [2024-07-15 21:20:34.006929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.966 [2024-07-15 21:20:34.006936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.966 [2024-07-15 21:20:34.006942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.966 [2024-07-15 21:20:34.006956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.966 qpair failed and we were unable to recover it. 00:30:06.966 [2024-07-15 21:20:34.016862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.966 [2024-07-15 21:20:34.016924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.966 [2024-07-15 21:20:34.016949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.966 [2024-07-15 21:20:34.016957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.966 [2024-07-15 21:20:34.016964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.966 [2024-07-15 21:20:34.016982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.966 qpair failed and we were unable to recover it. 00:30:06.966 [2024-07-15 21:20:34.026948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.966 [2024-07-15 21:20:34.027023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.966 [2024-07-15 21:20:34.027049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.966 [2024-07-15 21:20:34.027061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.966 [2024-07-15 21:20:34.027068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.027087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-15 21:20:34.036952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.037024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.037049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.037057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.037064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.037082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.046850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.046922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.046947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.046955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.046962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.046980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.056873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.056933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.056950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.056957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.056963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.056978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-15 21:20:34.067100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.067169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.067185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.067192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.067198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.067213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.077031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.077098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.077114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.077121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.077127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.077142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.087072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.087127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.087143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.087150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.087156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.087170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-15 21:20:34.096975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.097036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.097051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.097058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.097064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.097078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.107111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.107179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.107194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.107201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.107207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.107221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.117145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.117247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.117263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.117273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.117280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.117294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-15 21:20:34.127169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.127225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.127251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.127261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.127270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.127290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.137201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.137269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.137286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.137294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.137300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.137315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.147254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.147320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.147336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.147343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.147349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.147363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-15 21:20:34.157317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.157389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.157410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.157418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.157424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.157441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-15 21:20:34.167278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-15 21:20:34.167341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-15 21:20:34.167359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-15 21:20:34.167366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-15 21:20:34.167372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.967 [2024-07-15 21:20:34.167387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-15 21:20:34.177300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-15 21:20:34.177359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-15 21:20:34.177375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-15 21:20:34.177382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-15 21:20:34.177388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.968 [2024-07-15 21:20:34.177402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 
00:30:06.968 [2024-07-15 21:20:34.187370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-15 21:20:34.187436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-15 21:20:34.187452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-15 21:20:34.187459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-15 21:20:34.187465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.968 [2024-07-15 21:20:34.187480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-15 21:20:34.197370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-15 21:20:34.197437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-15 21:20:34.197453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-15 21:20:34.197460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-15 21:20:34.197466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.968 [2024-07-15 21:20:34.197480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-15 21:20:34.207394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-15 21:20:34.207453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-15 21:20:34.207472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-15 21:20:34.207479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-15 21:20:34.207485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.968 [2024-07-15 21:20:34.207499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 
00:30:06.968 [2024-07-15 21:20:34.217410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-15 21:20:34.217472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-15 21:20:34.217488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-15 21:20:34.217494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-15 21:20:34.217501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.968 [2024-07-15 21:20:34.217514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-15 21:20:34.227476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-15 21:20:34.227574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-15 21:20:34.227589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-15 21:20:34.227596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-15 21:20:34.227602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.968 [2024-07-15 21:20:34.227616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-15 21:20:34.237473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-15 21:20:34.237540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-15 21:20:34.237555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-15 21:20:34.237562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-15 21:20:34.237568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.968 [2024-07-15 21:20:34.237581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 
00:30:06.968 [2024-07-15 21:20:34.247504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-15 21:20:34.247567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-15 21:20:34.247582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-15 21:20:34.247589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-15 21:20:34.247595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:06.968 [2024-07-15 21:20:34.247608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:07.230 [2024-07-15 21:20:34.257556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.230 [2024-07-15 21:20:34.257619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.230 [2024-07-15 21:20:34.257635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.230 [2024-07-15 21:20:34.257641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.230 [2024-07-15 21:20:34.257647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.230 [2024-07-15 21:20:34.257661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.230 qpair failed and we were unable to recover it. 00:30:07.230 [2024-07-15 21:20:34.267499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.230 [2024-07-15 21:20:34.267565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.230 [2024-07-15 21:20:34.267582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.230 [2024-07-15 21:20:34.267589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.230 [2024-07-15 21:20:34.267595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.230 [2024-07-15 21:20:34.267609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.230 qpair failed and we were unable to recover it. 
00:30:07.230 [2024-07-15 21:20:34.277588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.230 [2024-07-15 21:20:34.277686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.230 [2024-07-15 21:20:34.277702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.230 [2024-07-15 21:20:34.277709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.230 [2024-07-15 21:20:34.277715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.230 [2024-07-15 21:20:34.277728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.230 qpair failed and we were unable to recover it. 00:30:07.230 [2024-07-15 21:20:34.287681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.230 [2024-07-15 21:20:34.287750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.230 [2024-07-15 21:20:34.287766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.230 [2024-07-15 21:20:34.287773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.230 [2024-07-15 21:20:34.287779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.230 [2024-07-15 21:20:34.287793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.230 qpair failed and we were unable to recover it. 00:30:07.230 [2024-07-15 21:20:34.297641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.230 [2024-07-15 21:20:34.297698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.230 [2024-07-15 21:20:34.297717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.230 [2024-07-15 21:20:34.297724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.230 [2024-07-15 21:20:34.297730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.230 [2024-07-15 21:20:34.297743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.230 qpair failed and we were unable to recover it. 
00:30:07.230 [2024-07-15 21:20:34.307717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.230 [2024-07-15 21:20:34.307779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.230 [2024-07-15 21:20:34.307794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.230 [2024-07-15 21:20:34.307801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.230 [2024-07-15 21:20:34.307807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.230 [2024-07-15 21:20:34.307820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.230 qpair failed and we were unable to recover it. 00:30:07.230 [2024-07-15 21:20:34.317683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.230 [2024-07-15 21:20:34.317745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.230 [2024-07-15 21:20:34.317760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.230 [2024-07-15 21:20:34.317767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.230 [2024-07-15 21:20:34.317773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.317786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.327714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.327770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.327786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.327792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.327798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.327812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 
00:30:07.231 [2024-07-15 21:20:34.337742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.337802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.337817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.337824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.337830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.337847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.347808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.347910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.347925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.347932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.347938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.347951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.357782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.357875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.357890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.357897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.357903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.357916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 
00:30:07.231 [2024-07-15 21:20:34.367845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.367904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.367920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.367927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.367933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.367946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.377883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.377952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.377977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.377985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.377992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.378010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.387960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.388063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.388095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.388104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.388111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.388129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 
00:30:07.231 [2024-07-15 21:20:34.397825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.397894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.397912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.397920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.397926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.397942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.407936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.408009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.408026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.408033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.408039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.408053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.417962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.418026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.418051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.418059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.418065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.418085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 
00:30:07.231 [2024-07-15 21:20:34.428002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.428065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.428083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.428090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.428096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.428115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.437889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.437955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.437971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.437978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.437984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.437998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.448027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.448096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.448112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.448119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.448125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.448138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 
00:30:07.231 [2024-07-15 21:20:34.458068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.458128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.231 [2024-07-15 21:20:34.458143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.231 [2024-07-15 21:20:34.458150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.231 [2024-07-15 21:20:34.458156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.231 [2024-07-15 21:20:34.458169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.231 qpair failed and we were unable to recover it. 00:30:07.231 [2024-07-15 21:20:34.468039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.231 [2024-07-15 21:20:34.468101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.232 [2024-07-15 21:20:34.468117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.232 [2024-07-15 21:20:34.468123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.232 [2024-07-15 21:20:34.468130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.232 [2024-07-15 21:20:34.468144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.232 qpair failed and we were unable to recover it. 00:30:07.232 [2024-07-15 21:20:34.478120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.232 [2024-07-15 21:20:34.478190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.232 [2024-07-15 21:20:34.478209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.232 [2024-07-15 21:20:34.478216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.232 [2024-07-15 21:20:34.478222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.232 [2024-07-15 21:20:34.478240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.232 qpair failed and we were unable to recover it. 
00:30:07.232 [2024-07-15 21:20:34.488032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.232 [2024-07-15 21:20:34.488089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.232 [2024-07-15 21:20:34.488104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.232 [2024-07-15 21:20:34.488111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.232 [2024-07-15 21:20:34.488117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.232 [2024-07-15 21:20:34.488130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.232 qpair failed and we were unable to recover it. 00:30:07.232 [2024-07-15 21:20:34.498244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.232 [2024-07-15 21:20:34.498310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.232 [2024-07-15 21:20:34.498326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.232 [2024-07-15 21:20:34.498332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.232 [2024-07-15 21:20:34.498338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.232 [2024-07-15 21:20:34.498352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.232 qpair failed and we were unable to recover it. 00:30:07.232 [2024-07-15 21:20:34.508248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.232 [2024-07-15 21:20:34.508311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.232 [2024-07-15 21:20:34.508326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.232 [2024-07-15 21:20:34.508333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.232 [2024-07-15 21:20:34.508339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.232 [2024-07-15 21:20:34.508352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.232 qpair failed and we were unable to recover it. 
00:30:07.232 [2024-07-15 21:20:34.518217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.232 [2024-07-15 21:20:34.518293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.232 [2024-07-15 21:20:34.518309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.232 [2024-07-15 21:20:34.518315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.232 [2024-07-15 21:20:34.518325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.232 [2024-07-15 21:20:34.518339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.232 qpair failed and we were unable to recover it. 00:30:07.494 [2024-07-15 21:20:34.528263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.494 [2024-07-15 21:20:34.528321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.494 [2024-07-15 21:20:34.528337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.494 [2024-07-15 21:20:34.528343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.494 [2024-07-15 21:20:34.528349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.494 [2024-07-15 21:20:34.528363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.494 qpair failed and we were unable to recover it. 00:30:07.494 [2024-07-15 21:20:34.538339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.494 [2024-07-15 21:20:34.538407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.494 [2024-07-15 21:20:34.538422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.494 [2024-07-15 21:20:34.538429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.494 [2024-07-15 21:20:34.538435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.494 [2024-07-15 21:20:34.538448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.494 qpair failed and we were unable to recover it. 
00:30:07.494 [2024-07-15 21:20:34.548435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.494 [2024-07-15 21:20:34.548497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.494 [2024-07-15 21:20:34.548512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.494 [2024-07-15 21:20:34.548519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.494 [2024-07-15 21:20:34.548525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.494 [2024-07-15 21:20:34.548539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.494 qpair failed and we were unable to recover it. 00:30:07.494 [2024-07-15 21:20:34.558256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.494 [2024-07-15 21:20:34.558364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.494 [2024-07-15 21:20:34.558380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.494 [2024-07-15 21:20:34.558387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.494 [2024-07-15 21:20:34.558392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.494 [2024-07-15 21:20:34.558406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.494 qpair failed and we were unable to recover it. 00:30:07.494 [2024-07-15 21:20:34.568364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.494 [2024-07-15 21:20:34.568426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.494 [2024-07-15 21:20:34.568441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.494 [2024-07-15 21:20:34.568448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.494 [2024-07-15 21:20:34.568454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.494 [2024-07-15 21:20:34.568468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.494 qpair failed and we were unable to recover it. 
00:30:07.494 [2024-07-15 21:20:34.578389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.494 [2024-07-15 21:20:34.578502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.494 [2024-07-15 21:20:34.578517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.494 [2024-07-15 21:20:34.578524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.494 [2024-07-15 21:20:34.578530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.494 [2024-07-15 21:20:34.578544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.494 qpair failed and we were unable to recover it. 00:30:07.494 [2024-07-15 21:20:34.588459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.494 [2024-07-15 21:20:34.588526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.494 [2024-07-15 21:20:34.588542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.494 [2024-07-15 21:20:34.588548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.494 [2024-07-15 21:20:34.588554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.494 [2024-07-15 21:20:34.588568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.494 qpair failed and we were unable to recover it. 00:30:07.494 [2024-07-15 21:20:34.598419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.494 [2024-07-15 21:20:34.598495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.494 [2024-07-15 21:20:34.598510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.598516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.598523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.598536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 
00:30:07.495 [2024-07-15 21:20:34.608506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.608567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.608582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.608588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.608598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.608611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.618471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.618534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.618549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.618556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.618562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.618576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.628571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.628633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.628648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.628655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.628661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.628675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 
00:30:07.495 [2024-07-15 21:20:34.638591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.638666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.638683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.638690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.638696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.638710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.648618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.648689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.648708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.648715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.648721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.648735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.658583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.658648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.658665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.658672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.658677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.658691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 
00:30:07.495 [2024-07-15 21:20:34.668746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.668812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.668828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.668834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.668840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.668854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.678676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.678738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.678754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.678760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.678767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.678780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.688701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.688759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.688775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.688782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.688788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.688802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 
00:30:07.495 [2024-07-15 21:20:34.698792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.698870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.698885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.698892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.698902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.698915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.708785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.708850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.708866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.708873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.708879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.708893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.718777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.718847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.718862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.718869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.718875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.718888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 
00:30:07.495 [2024-07-15 21:20:34.728821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.728877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.728893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.728899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.728905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.728919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.495 [2024-07-15 21:20:34.738853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.495 [2024-07-15 21:20:34.738909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.495 [2024-07-15 21:20:34.738925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.495 [2024-07-15 21:20:34.738932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.495 [2024-07-15 21:20:34.738938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.495 [2024-07-15 21:20:34.738952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.495 qpair failed and we were unable to recover it. 00:30:07.496 [2024-07-15 21:20:34.748913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.496 [2024-07-15 21:20:34.748978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.496 [2024-07-15 21:20:34.748993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.496 [2024-07-15 21:20:34.749000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.496 [2024-07-15 21:20:34.749006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.496 [2024-07-15 21:20:34.749020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.496 qpair failed and we were unable to recover it. 
00:30:07.496 [2024-07-15 21:20:34.758902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.496 [2024-07-15 21:20:34.758962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.496 [2024-07-15 21:20:34.758977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.496 [2024-07-15 21:20:34.758984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.496 [2024-07-15 21:20:34.758990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.496 [2024-07-15 21:20:34.759003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.496 qpair failed and we were unable to recover it. 00:30:07.496 [2024-07-15 21:20:34.768952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.496 [2024-07-15 21:20:34.769017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.496 [2024-07-15 21:20:34.769042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.496 [2024-07-15 21:20:34.769050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.496 [2024-07-15 21:20:34.769057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.496 [2024-07-15 21:20:34.769076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.496 qpair failed and we were unable to recover it. 00:30:07.496 [2024-07-15 21:20:34.778969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.496 [2024-07-15 21:20:34.779031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.496 [2024-07-15 21:20:34.779056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.496 [2024-07-15 21:20:34.779064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.496 [2024-07-15 21:20:34.779071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.496 [2024-07-15 21:20:34.779089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.496 qpair failed and we were unable to recover it. 
00:30:07.758 [2024-07-15 21:20:34.789028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.758 [2024-07-15 21:20:34.789099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.758 [2024-07-15 21:20:34.789124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.758 [2024-07-15 21:20:34.789137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.758 [2024-07-15 21:20:34.789144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.758 [2024-07-15 21:20:34.789162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.758 qpair failed and we were unable to recover it. 00:30:07.758 [2024-07-15 21:20:34.798986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.758 [2024-07-15 21:20:34.799052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.758 [2024-07-15 21:20:34.799069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.758 [2024-07-15 21:20:34.799077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.758 [2024-07-15 21:20:34.799083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.758 [2024-07-15 21:20:34.799097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.758 qpair failed and we were unable to recover it. 00:30:07.758 [2024-07-15 21:20:34.809099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.758 [2024-07-15 21:20:34.809187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.758 [2024-07-15 21:20:34.809214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.758 [2024-07-15 21:20:34.809222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.758 [2024-07-15 21:20:34.809234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.758 [2024-07-15 21:20:34.809254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.758 qpair failed and we were unable to recover it. 
00:30:07.758 [2024-07-15 21:20:34.819025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.758 [2024-07-15 21:20:34.819087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.758 [2024-07-15 21:20:34.819105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.758 [2024-07-15 21:20:34.819112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.758 [2024-07-15 21:20:34.819118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.758 [2024-07-15 21:20:34.819133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.758 qpair failed and we were unable to recover it. 00:30:07.758 [2024-07-15 21:20:34.829095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.758 [2024-07-15 21:20:34.829159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.758 [2024-07-15 21:20:34.829174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.758 [2024-07-15 21:20:34.829181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.758 [2024-07-15 21:20:34.829187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.758 [2024-07-15 21:20:34.829202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.758 qpair failed and we were unable to recover it. 00:30:07.758 [2024-07-15 21:20:34.839114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.758 [2024-07-15 21:20:34.839178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.758 [2024-07-15 21:20:34.839193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.758 [2024-07-15 21:20:34.839200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.758 [2024-07-15 21:20:34.839206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.758 [2024-07-15 21:20:34.839220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.758 qpair failed and we were unable to recover it. 
00:30:07.758 [2024-07-15 21:20:34.849145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.758 [2024-07-15 21:20:34.849203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.758 [2024-07-15 21:20:34.849217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.758 [2024-07-15 21:20:34.849224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.758 [2024-07-15 21:20:34.849234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.758 [2024-07-15 21:20:34.849248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.758 qpair failed and we were unable to recover it. 00:30:07.758 [2024-07-15 21:20:34.859177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.758 [2024-07-15 21:20:34.859233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.758 [2024-07-15 21:20:34.859248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.859255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.859261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.859275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.869240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.869304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.869319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.869326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.869332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.869346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 
00:30:07.759 [2024-07-15 21:20:34.879204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.879280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.879297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.879308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.879315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.879329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.889258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.889318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.889335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.889343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.889349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.889364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.899280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.899332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.899348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.899355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.899361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.899375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 
00:30:07.759 [2024-07-15 21:20:34.909361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.909444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.909459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.909466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.909472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.909486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.919316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.919432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.919447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.919454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.919461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.919474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.929379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.929480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.929498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.929505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.929511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.929526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 
00:30:07.759 [2024-07-15 21:20:34.939383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.939442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.939457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.939464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.939470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.939484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.949465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.949527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.949543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.949549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.949555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.949569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.959445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.959517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.959532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.959539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.959545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.959558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 
00:30:07.759 [2024-07-15 21:20:34.969501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.969557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.969572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.969583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.969589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.969603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.979384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.979447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.979462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.979469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.979475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.979488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 00:30:07.759 [2024-07-15 21:20:34.989567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.989649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.989664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.989671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.989677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.989691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.759 qpair failed and we were unable to recover it. 
00:30:07.759 [2024-07-15 21:20:34.999560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.759 [2024-07-15 21:20:34.999620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.759 [2024-07-15 21:20:34.999635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.759 [2024-07-15 21:20:34.999642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.759 [2024-07-15 21:20:34.999648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.759 [2024-07-15 21:20:34.999661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.760 qpair failed and we were unable to recover it. 00:30:07.760 [2024-07-15 21:20:35.009563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.760 [2024-07-15 21:20:35.009635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.760 [2024-07-15 21:20:35.009650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.760 [2024-07-15 21:20:35.009657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.760 [2024-07-15 21:20:35.009663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.760 [2024-07-15 21:20:35.009677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.760 qpair failed and we were unable to recover it. 00:30:07.760 [2024-07-15 21:20:35.019579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.760 [2024-07-15 21:20:35.019645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.760 [2024-07-15 21:20:35.019661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.760 [2024-07-15 21:20:35.019668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.760 [2024-07-15 21:20:35.019674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.760 [2024-07-15 21:20:35.019687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.760 qpair failed and we were unable to recover it. 
00:30:07.760 [2024-07-15 21:20:35.029675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.760 [2024-07-15 21:20:35.029741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.760 [2024-07-15 21:20:35.029756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.760 [2024-07-15 21:20:35.029763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.760 [2024-07-15 21:20:35.029769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.760 [2024-07-15 21:20:35.029783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.760 qpair failed and we were unable to recover it. 00:30:07.760 [2024-07-15 21:20:35.039649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.760 [2024-07-15 21:20:35.039713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.760 [2024-07-15 21:20:35.039728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.760 [2024-07-15 21:20:35.039735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.760 [2024-07-15 21:20:35.039741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:07.760 [2024-07-15 21:20:35.039754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.760 qpair failed and we were unable to recover it. 00:30:08.023 [2024-07-15 21:20:35.049692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.023 [2024-07-15 21:20:35.049751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.023 [2024-07-15 21:20:35.049766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.023 [2024-07-15 21:20:35.049772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.023 [2024-07-15 21:20:35.049778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.023 [2024-07-15 21:20:35.049792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.023 qpair failed and we were unable to recover it. 
00:30:08.023 [2024-07-15 21:20:35.059714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.023 [2024-07-15 21:20:35.059772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.023 [2024-07-15 21:20:35.059791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.023 [2024-07-15 21:20:35.059799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.023 [2024-07-15 21:20:35.059805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.023 [2024-07-15 21:20:35.059818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.023 qpair failed and we were unable to recover it. 00:30:08.023 [2024-07-15 21:20:35.069755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.023 [2024-07-15 21:20:35.069859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.023 [2024-07-15 21:20:35.069875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.023 [2024-07-15 21:20:35.069882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.023 [2024-07-15 21:20:35.069888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.023 [2024-07-15 21:20:35.069902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.023 qpair failed and we were unable to recover it. 00:30:08.023 [2024-07-15 21:20:35.079765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.023 [2024-07-15 21:20:35.079828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.023 [2024-07-15 21:20:35.079843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.023 [2024-07-15 21:20:35.079850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.023 [2024-07-15 21:20:35.079856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.023 [2024-07-15 21:20:35.079870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.023 qpair failed and we were unable to recover it. 
00:30:08.023 [2024-07-15 21:20:35.089787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.023 [2024-07-15 21:20:35.089845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.023 [2024-07-15 21:20:35.089861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.023 [2024-07-15 21:20:35.089868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.023 [2024-07-15 21:20:35.089874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.023 [2024-07-15 21:20:35.089888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.023 qpair failed and we were unable to recover it. 00:30:08.023 [2024-07-15 21:20:35.099855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.023 [2024-07-15 21:20:35.099938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.023 [2024-07-15 21:20:35.099953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.023 [2024-07-15 21:20:35.099960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.023 [2024-07-15 21:20:35.099967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.023 [2024-07-15 21:20:35.099984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.023 qpair failed and we were unable to recover it. 00:30:08.023 [2024-07-15 21:20:35.109865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.023 [2024-07-15 21:20:35.109931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.023 [2024-07-15 21:20:35.109946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.023 [2024-07-15 21:20:35.109953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.023 [2024-07-15 21:20:35.109959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.023 [2024-07-15 21:20:35.109973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.023 qpair failed and we were unable to recover it. 
00:30:08.023 [2024-07-15 21:20:35.119839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.023 [2024-07-15 21:20:35.119909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.023 [2024-07-15 21:20:35.119934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.119942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.119949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.119967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.129907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.129968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.129992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.130001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.130007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.130026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.139903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.139975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.139999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.140008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.140014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.140032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 
00:30:08.024 [2024-07-15 21:20:35.149893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.149964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.149996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.150005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.150012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.150030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.159907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.159969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.159986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.159994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.160000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.160014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.169996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.170063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.170079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.170087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.170093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.170107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 
00:30:08.024 [2024-07-15 21:20:35.180032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.180090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.180105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.180112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.180118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.180132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.190142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.190210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.190225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.190236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.190242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.190260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.200088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.200208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.200224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.200234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.200241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.200254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 
00:30:08.024 [2024-07-15 21:20:35.210106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.210160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.210175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.210182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.210188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.210202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.220030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.220099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.220116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.220123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.220129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.220143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.230266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.230326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.230341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.230348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.230354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.230368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 
00:30:08.024 [2024-07-15 21:20:35.240190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.240255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.240274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.240281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.240287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.240301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.250217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.250278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.250293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.250300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.250305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.024 [2024-07-15 21:20:35.250319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.024 qpair failed and we were unable to recover it. 00:30:08.024 [2024-07-15 21:20:35.260262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.024 [2024-07-15 21:20:35.260368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.024 [2024-07-15 21:20:35.260383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.024 [2024-07-15 21:20:35.260390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.024 [2024-07-15 21:20:35.260396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.025 [2024-07-15 21:20:35.260409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.025 qpair failed and we were unable to recover it. 
00:30:08.025 [2024-07-15 21:20:35.270348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.025 [2024-07-15 21:20:35.270430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.025 [2024-07-15 21:20:35.270445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.025 [2024-07-15 21:20:35.270452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.025 [2024-07-15 21:20:35.270458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.025 [2024-07-15 21:20:35.270472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.025 qpair failed and we were unable to recover it. 00:30:08.025 [2024-07-15 21:20:35.280327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.025 [2024-07-15 21:20:35.280388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.025 [2024-07-15 21:20:35.280403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.025 [2024-07-15 21:20:35.280410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.025 [2024-07-15 21:20:35.280416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.025 [2024-07-15 21:20:35.280433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.025 qpair failed and we were unable to recover it. 00:30:08.025 [2024-07-15 21:20:35.290218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.025 [2024-07-15 21:20:35.290279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.025 [2024-07-15 21:20:35.290294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.025 [2024-07-15 21:20:35.290301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.025 [2024-07-15 21:20:35.290307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.025 [2024-07-15 21:20:35.290321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.025 qpair failed and we were unable to recover it. 
00:30:08.025 [2024-07-15 21:20:35.300251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.025 [2024-07-15 21:20:35.300311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.025 [2024-07-15 21:20:35.300326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.025 [2024-07-15 21:20:35.300333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.025 [2024-07-15 21:20:35.300339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.025 [2024-07-15 21:20:35.300352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.025 qpair failed and we were unable to recover it. 00:30:08.025 [2024-07-15 21:20:35.310432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.025 [2024-07-15 21:20:35.310498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.025 [2024-07-15 21:20:35.310513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.025 [2024-07-15 21:20:35.310520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.025 [2024-07-15 21:20:35.310526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.025 [2024-07-15 21:20:35.310539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.025 qpair failed and we were unable to recover it. 00:30:08.288 [2024-07-15 21:20:35.320433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.320497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.320513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.320520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.288 [2024-07-15 21:20:35.320526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.288 [2024-07-15 21:20:35.320539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.288 qpair failed and we were unable to recover it. 
00:30:08.288 [2024-07-15 21:20:35.330481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.330540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.330559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.330566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.288 [2024-07-15 21:20:35.330572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.288 [2024-07-15 21:20:35.330586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.288 qpair failed and we were unable to recover it. 00:30:08.288 [2024-07-15 21:20:35.340467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.340547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.340562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.340569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.288 [2024-07-15 21:20:35.340575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.288 [2024-07-15 21:20:35.340588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.288 qpair failed and we were unable to recover it. 00:30:08.288 [2024-07-15 21:20:35.350530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.350591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.350606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.350613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.288 [2024-07-15 21:20:35.350619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.288 [2024-07-15 21:20:35.350633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.288 qpair failed and we were unable to recover it. 
00:30:08.288 [2024-07-15 21:20:35.360554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.360660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.360675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.360682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.288 [2024-07-15 21:20:35.360688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.288 [2024-07-15 21:20:35.360701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.288 qpair failed and we were unable to recover it. 00:30:08.288 [2024-07-15 21:20:35.370595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.370669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.370684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.370691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.288 [2024-07-15 21:20:35.370701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.288 [2024-07-15 21:20:35.370716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.288 qpair failed and we were unable to recover it. 00:30:08.288 [2024-07-15 21:20:35.380578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.380638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.380653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.380660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.288 [2024-07-15 21:20:35.380666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.288 [2024-07-15 21:20:35.380680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.288 qpair failed and we were unable to recover it. 
00:30:08.288 [2024-07-15 21:20:35.390658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.390755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.390772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.390779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.288 [2024-07-15 21:20:35.390786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.288 [2024-07-15 21:20:35.390800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.288 qpair failed and we were unable to recover it. 00:30:08.288 [2024-07-15 21:20:35.400622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.288 [2024-07-15 21:20:35.400684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.288 [2024-07-15 21:20:35.400700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.288 [2024-07-15 21:20:35.400707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.400713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.400727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.410536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.410598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.410613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.410620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.410626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.410641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 
00:30:08.289 [2024-07-15 21:20:35.420604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.420664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.420679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.420686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.420692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.420705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.430725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.430787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.430803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.430810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.430815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.430830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.440734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.440794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.440810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.440816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.440822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.440837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 
00:30:08.289 [2024-07-15 21:20:35.450787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.450846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.450862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.450868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.450874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.450888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.460777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.460836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.460851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.460858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.460868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.460881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.470850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.470916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.470931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.470938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.470944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.470958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 
00:30:08.289 [2024-07-15 21:20:35.480846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.480913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.480938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.480946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.480953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.480971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.490811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.490875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.490892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.490899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.490906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.490921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.500883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.500952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.500978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.500986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.500993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.501012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 
00:30:08.289 [2024-07-15 21:20:35.510838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.510908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.510924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.510931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.510938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.510953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.520829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.520901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.520917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.520924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.520930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.520944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 00:30:08.289 [2024-07-15 21:20:35.530844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.530912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.530927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.530934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.530940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.530953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.289 qpair failed and we were unable to recover it. 
00:30:08.289 [2024-07-15 21:20:35.540983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.289 [2024-07-15 21:20:35.541051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.289 [2024-07-15 21:20:35.541077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.289 [2024-07-15 21:20:35.541085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.289 [2024-07-15 21:20:35.541092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.289 [2024-07-15 21:20:35.541110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.290 qpair failed and we were unable to recover it. 00:30:08.290 [2024-07-15 21:20:35.551056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.290 [2024-07-15 21:20:35.551119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.290 [2024-07-15 21:20:35.551136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.290 [2024-07-15 21:20:35.551147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.290 [2024-07-15 21:20:35.551153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.290 [2024-07-15 21:20:35.551169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.290 qpair failed and we were unable to recover it. 00:30:08.290 [2024-07-15 21:20:35.561054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.290 [2024-07-15 21:20:35.561119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.290 [2024-07-15 21:20:35.561135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.290 [2024-07-15 21:20:35.561142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.290 [2024-07-15 21:20:35.561149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.290 [2024-07-15 21:20:35.561163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.290 qpair failed and we were unable to recover it. 
00:30:08.290 [2024-07-15 21:20:35.571067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.290 [2024-07-15 21:20:35.571125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.290 [2024-07-15 21:20:35.571140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.290 [2024-07-15 21:20:35.571147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.290 [2024-07-15 21:20:35.571153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.290 [2024-07-15 21:20:35.571167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.290 qpair failed and we were unable to recover it. 00:30:08.552 [2024-07-15 21:20:35.580983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.552 [2024-07-15 21:20:35.581053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.552 [2024-07-15 21:20:35.581070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.552 [2024-07-15 21:20:35.581077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.552 [2024-07-15 21:20:35.581083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.552 [2024-07-15 21:20:35.581098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.552 qpair failed and we were unable to recover it. 00:30:08.552 [2024-07-15 21:20:35.591161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.552 [2024-07-15 21:20:35.591227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.552 [2024-07-15 21:20:35.591246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.552 [2024-07-15 21:20:35.591253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.552 [2024-07-15 21:20:35.591260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.552 [2024-07-15 21:20:35.591274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.552 qpair failed and we were unable to recover it. 
00:30:08.552 [2024-07-15 21:20:35.601162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.552 [2024-07-15 21:20:35.601232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.552 [2024-07-15 21:20:35.601248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.552 [2024-07-15 21:20:35.601255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.552 [2024-07-15 21:20:35.601261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.552 [2024-07-15 21:20:35.601275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.552 qpair failed and we were unable to recover it. 00:30:08.552 [2024-07-15 21:20:35.611193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.552 [2024-07-15 21:20:35.611253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.552 [2024-07-15 21:20:35.611269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.552 [2024-07-15 21:20:35.611276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.552 [2024-07-15 21:20:35.611281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.552 [2024-07-15 21:20:35.611296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.552 qpair failed and we were unable to recover it. 00:30:08.552 [2024-07-15 21:20:35.621206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.552 [2024-07-15 21:20:35.621273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.552 [2024-07-15 21:20:35.621288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.552 [2024-07-15 21:20:35.621295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.552 [2024-07-15 21:20:35.621301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.552 [2024-07-15 21:20:35.621314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.552 qpair failed and we were unable to recover it. 
00:30:08.552 [2024-07-15 21:20:35.631155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.552 [2024-07-15 21:20:35.631218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.552 [2024-07-15 21:20:35.631236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.552 [2024-07-15 21:20:35.631243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.552 [2024-07-15 21:20:35.631249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.552 [2024-07-15 21:20:35.631263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.552 qpair failed and we were unable to recover it. 00:30:08.552 [2024-07-15 21:20:35.641306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.552 [2024-07-15 21:20:35.641372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.552 [2024-07-15 21:20:35.641390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.552 [2024-07-15 21:20:35.641400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.552 [2024-07-15 21:20:35.641406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.552 [2024-07-15 21:20:35.641421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.552 qpair failed and we were unable to recover it. 00:30:08.552 [2024-07-15 21:20:35.651325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.552 [2024-07-15 21:20:35.651386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.552 [2024-07-15 21:20:35.651402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.552 [2024-07-15 21:20:35.651409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.552 [2024-07-15 21:20:35.651415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x240fa50 00:30:08.552 [2024-07-15 21:20:35.651429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.552 qpair failed and we were unable to recover it. 00:30:08.552 [2024-07-15 21:20:35.651580] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:08.552 A controller has encountered a failure and is being reset. 00:30:08.553 [2024-07-15 21:20:35.651690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241d750 (9): Bad file descriptor 00:30:08.553 Controller properly reset. 
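Every failure in the loop above reports the same completion status, "sct 1, sc 130". Reading that pair against the NVMe-oF specification (an interpretation added here, not something the test itself prints): SCT 1 is Command Specific Status, and for a Fabrics CONNECT command SC 0x82 (decimal 130) is Connect Invalid Parameters, which lines up with the target-side "Unknown controller ID 0x1" errors. Below is a minimal bash sketch of a decoder under that assumption; the helper is hypothetical and not part of the SPDK tree or this test suite.

#!/usr/bin/env bash
# decode_connect_status.sh - hypothetical helper, not shipped with SPDK.
# Maps the "sct N, sc N" pair printed by _nvme_fabric_qpair_connect_poll to a
# human-readable meaning, assuming the standard NVMe-oF Fabrics CONNECT status codes.
decode_connect_status() {
    local sct=$1 sc=$2
    if [[ $sct -ne 1 ]]; then
        # Only SCT 1 (Command Specific Status) is handled here.
        echo "sct $sct: not Command Specific Status; see the NVMe base specification"
        return
    fi
    case $(printf '0x%02x' "$sc") in
        0x80) echo "Connect Incompatible Format" ;;
        0x81) echo "Connect Controller Busy" ;;
        0x82) echo "Connect Invalid Parameters (e.g. unknown controller ID)" ;;
        0x83) echo "Connect Restart Discovery" ;;
        0x84) echo "Connect Invalid Host" ;;
        *)    echo "Unrecognized Fabrics CONNECT status code $sc" ;;
    esac
}
# Example: the status logged throughout this run.
decode_connect_status 1 130   # -> Connect Invalid Parameters (e.g. unknown controller ID)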
00:30:08.553 Initializing NVMe Controllers 00:30:08.553 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:08.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:08.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:08.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:08.553 Initialization complete. Launching workers. 00:30:08.553 Starting thread on core 1 00:30:08.553 Starting thread on core 2 00:30:08.553 Starting thread on core 3 00:30:08.553 Starting thread on core 0 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:08.553 00:30:08.553 real 0m11.358s 00:30:08.553 user 0m21.319s 00:30:08.553 sys 0m3.747s 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:08.553 ************************************ 00:30:08.553 END TEST nvmf_target_disconnect_tc2 00:30:08.553 ************************************ 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:08.553 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:08.553 rmmod nvme_tcp 00:30:08.553 rmmod nvme_fabrics 00:30:08.553 rmmod nvme_keyring 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2166281 ']' 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2166281 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2166281 ']' 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2166281 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2166281 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2166281' 00:30:08.814 killing process with pid 2166281 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2166281 00:30:08.814 21:20:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2166281 00:30:08.814 21:20:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.814 21:20:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.814 21:20:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.814 21:20:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.814 21:20:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.814 21:20:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.814 21:20:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.814 21:20:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.361 21:20:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:11.361 00:30:11.361 real 0m22.198s 00:30:11.361 user 0m49.112s 00:30:11.361 sys 0m10.184s 00:30:11.361 21:20:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:11.361 21:20:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:11.361 ************************************ 00:30:11.361 END TEST nvmf_target_disconnect 00:30:11.361 ************************************ 00:30:11.361 21:20:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:11.361 21:20:38 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:11.361 21:20:38 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:11.361 21:20:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.361 21:20:38 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:11.361 00:30:11.361 real 23m19.469s 00:30:11.361 user 47m1.770s 00:30:11.361 sys 7m39.801s 00:30:11.361 21:20:38 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:11.361 21:20:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.361 ************************************ 00:30:11.361 END TEST nvmf_tcp 00:30:11.361 ************************************ 00:30:11.361 21:20:38 -- common/autotest_common.sh@1142 -- # return 0 00:30:11.361 21:20:38 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:11.361 21:20:38 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:11.361 21:20:38 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:11.361 21:20:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:11.361 21:20:38 -- common/autotest_common.sh@10 -- # set +x 00:30:11.361 ************************************ 00:30:11.361 START TEST spdkcli_nvmf_tcp 00:30:11.361 ************************************ 00:30:11.361 21:20:38 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:11.361 * Looking for test storage... 00:30:11.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.361 21:20:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2168115 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2168115 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2168115 ']' 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:11.362 21:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.362 [2024-07-15 21:20:38.464536] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:30:11.362 [2024-07-15 21:20:38.464602] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168115 ] 00:30:11.362 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.362 [2024-07-15 21:20:38.535245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:11.362 [2024-07-15 21:20:38.610225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.362 [2024-07-15 21:20:38.610234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.934 21:20:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.196 21:20:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:12.196 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:12.196 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:12.196 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:12.196 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:12.196 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:12.196 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:12.196 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:12.196 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:12.196 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:12.196 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:12.196 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:12.196 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:12.196 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:12.196 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:12.196 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:12.196 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:12.197 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:12.197 ' 00:30:14.740 [2024-07-15 21:20:41.850643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.123 [2024-07-15 21:20:43.146787] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:18.665 [2024-07-15 21:20:45.561950] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:20.573 [2024-07-15 21:20:47.648165] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:21.953 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:21.953 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:21.953 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:21.953 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:21.953 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:21.953 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:21.953 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:21.953 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:21.953 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:21.953 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:21.953 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:21.953 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:21.953 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:22.213 21:20:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:22.213 21:20:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:22.213 21:20:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.213 21:20:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:22.213 21:20:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:22.213 21:20:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.213 21:20:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:22.213 21:20:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:22.473 21:20:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:22.473 21:20:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:22.473 21:20:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:22.473 21:20:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:22.473 21:20:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.734 21:20:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:22.734 21:20:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:22.734 21:20:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.734 21:20:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:22.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:22.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:22.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:22.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:22.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:22.734 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:22.734 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:22.734 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:22.734 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:22.734 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:22.734 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:22.734 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:22.734 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:22.734 ' 00:30:28.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:28.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:28.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:28.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:28.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:28.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:28.019 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:28.019 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:28.019 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:28.019 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:28.019 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:28.019 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:28.019 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:28.019 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2168115 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2168115 ']' 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2168115 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.019 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2168115 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2168115' 00:30:28.279 killing process with pid 2168115 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2168115 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2168115 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2168115 ']' 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2168115 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2168115 ']' 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2168115 00:30:28.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2168115) - No such process 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2168115 is not found' 00:30:28.279 Process with pid 2168115 is not found 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:28.279 00:30:28.279 real 0m17.169s 00:30:28.279 user 0m37.589s 00:30:28.279 sys 0m0.891s 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:28.279 21:20:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:28.279 ************************************ 00:30:28.279 END TEST spdkcli_nvmf_tcp 00:30:28.279 ************************************ 00:30:28.279 21:20:55 -- common/autotest_common.sh@1142 -- # return 0 00:30:28.279 21:20:55 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:28.279 21:20:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:28.279 21:20:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:28.279 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:30:28.279 ************************************ 00:30:28.279 START TEST nvmf_identify_passthru 00:30:28.279 ************************************ 00:30:28.279 21:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:28.540 * Looking for test storage... 00:30:28.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.540 21:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.540 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.540 21:20:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.540 21:20:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.540 21:20:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.540 21:20:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.540 21:20:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.540 21:20:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.540 21:20:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:28.541 21:20:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.541 21:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.541 21:20:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.541 21:20:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.541 21:20:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.541 21:20:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.541 21:20:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.541 21:20:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.541 21:20:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:28.541 21:20:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.541 21:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.541 21:20:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:28.541 21:20:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:28.541 21:20:55 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:28.541 21:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:36.751 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.751 21:21:03 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:36.751 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:36.751 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:36.751 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:36.752 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:36.752 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:36.752 Found net devices under 0000:31:00.0: cvl_0_0 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:36.752 Found net devices under 0000:31:00.1: cvl_0_1 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.752 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:36.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:30:36.753 00:30:36.753 --- 10.0.0.2 ping statistics --- 00:30:36.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.753 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:36.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:30:36.753 00:30:36.753 --- 10.0.0.1 ping statistics --- 00:30:36.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.753 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:36.753 21:21:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:36.753 21:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:36.753 21:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:36.753 21:21:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:36.753 21:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:36.753 21:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:36.753 21:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:36.753 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:36.753 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:36.753 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:36.753 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:36.753 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:37.013 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.273 
21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:30:37.273 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:37.273 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:37.273 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:37.273 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.845 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:37.845 21:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:37.845 21:21:04 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:37.845 21:21:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:37.845 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:37.845 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:37.845 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:37.845 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2175979 00:30:37.845 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:37.845 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:37.845 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2175979 00:30:37.845 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2175979 ']' 00:30:37.845 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.845 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:37.845 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.845 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.845 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:37.845 [2024-07-15 21:21:05.070220] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:30:37.845 [2024-07-15 21:21:05.070286] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.845 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.106 [2024-07-15 21:21:05.148891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:38.106 [2024-07-15 21:21:05.220216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.106 [2024-07-15 21:21:05.220261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:38.106 [2024-07-15 21:21:05.220269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.106 [2024-07-15 21:21:05.220275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.106 [2024-07-15 21:21:05.220281] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.106 [2024-07-15 21:21:05.220469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.106 [2024-07-15 21:21:05.220656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.106 [2024-07-15 21:21:05.220814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.106 [2024-07-15 21:21:05.220814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:38.677 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.677 INFO: Log level set to 20 00:30:38.677 INFO: Requests: 00:30:38.677 { 00:30:38.677 "jsonrpc": "2.0", 00:30:38.677 "method": "nvmf_set_config", 00:30:38.677 "id": 1, 00:30:38.677 "params": { 00:30:38.677 "admin_cmd_passthru": { 00:30:38.677 "identify_ctrlr": true 00:30:38.677 } 00:30:38.677 } 00:30:38.677 } 00:30:38.677 00:30:38.677 INFO: response: 00:30:38.677 { 00:30:38.677 "jsonrpc": "2.0", 00:30:38.677 "id": 1, 00:30:38.677 "result": true 00:30:38.677 } 00:30:38.677 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.677 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.677 INFO: Setting log level to 20 00:30:38.677 INFO: Setting log level to 20 00:30:38.677 INFO: Log level set to 20 00:30:38.677 INFO: Log level set to 20 00:30:38.677 INFO: Requests: 00:30:38.677 { 00:30:38.677 "jsonrpc": "2.0", 00:30:38.677 "method": "framework_start_init", 00:30:38.677 "id": 1 00:30:38.677 } 00:30:38.677 00:30:38.677 INFO: Requests: 00:30:38.677 { 00:30:38.677 "jsonrpc": "2.0", 00:30:38.677 "method": "framework_start_init", 00:30:38.677 "id": 1 00:30:38.677 } 00:30:38.677 00:30:38.677 [2024-07-15 21:21:05.930658] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:38.677 INFO: response: 00:30:38.677 { 00:30:38.677 "jsonrpc": "2.0", 00:30:38.677 "id": 1, 00:30:38.677 "result": true 00:30:38.677 } 00:30:38.677 00:30:38.677 INFO: response: 00:30:38.677 { 00:30:38.677 "jsonrpc": "2.0", 00:30:38.677 "id": 1, 00:30:38.677 "result": true 00:30:38.677 } 00:30:38.677 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.677 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.677 21:21:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:38.677 INFO: Setting log level to 40 00:30:38.677 INFO: Setting log level to 40 00:30:38.677 INFO: Setting log level to 40 00:30:38.677 [2024-07-15 21:21:05.943980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.677 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:38.677 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.938 21:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:38.938 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.938 21:21:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.199 Nvme0n1 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.199 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.199 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.199 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.199 [2024-07-15 21:21:06.333537] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.199 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.199 [ 00:30:39.199 { 00:30:39.199 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:39.199 "subtype": "Discovery", 00:30:39.199 "listen_addresses": [], 00:30:39.199 "allow_any_host": true, 00:30:39.199 "hosts": [] 00:30:39.199 }, 00:30:39.199 { 00:30:39.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:39.199 "subtype": "NVMe", 00:30:39.199 "listen_addresses": [ 00:30:39.199 { 00:30:39.199 "trtype": "TCP", 00:30:39.199 "adrfam": "IPv4", 00:30:39.199 "traddr": "10.0.0.2", 00:30:39.199 "trsvcid": "4420" 00:30:39.199 } 00:30:39.199 ], 00:30:39.199 "allow_any_host": true, 00:30:39.199 "hosts": [], 00:30:39.199 "serial_number": 
"SPDK00000000000001", 00:30:39.199 "model_number": "SPDK bdev Controller", 00:30:39.199 "max_namespaces": 1, 00:30:39.199 "min_cntlid": 1, 00:30:39.199 "max_cntlid": 65519, 00:30:39.199 "namespaces": [ 00:30:39.199 { 00:30:39.199 "nsid": 1, 00:30:39.199 "bdev_name": "Nvme0n1", 00:30:39.199 "name": "Nvme0n1", 00:30:39.199 "nguid": "3634473052605494002538450000002B", 00:30:39.199 "uuid": "36344730-5260-5494-0025-38450000002b" 00:30:39.199 } 00:30:39.199 ] 00:30:39.199 } 00:30:39.199 ] 00:30:39.199 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.199 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:39.199 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:39.199 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:39.199 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.459 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:30:39.459 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:39.459 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:39.459 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:39.459 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.459 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:39.719 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:30:39.719 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:39.719 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.719 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:39.719 21:21:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:39.719 rmmod nvme_tcp 00:30:39.719 rmmod nvme_fabrics 00:30:39.719 rmmod nvme_keyring 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:39.719 21:21:06 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2175979 ']' 00:30:39.719 21:21:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2175979 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2175979 ']' 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2175979 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2175979 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2175979' 00:30:39.719 killing process with pid 2175979 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2175979 00:30:39.719 21:21:06 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2175979 00:30:39.979 21:21:07 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:39.979 21:21:07 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:39.979 21:21:07 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:39.979 21:21:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:39.979 21:21:07 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:39.979 21:21:07 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.979 21:21:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:39.979 21:21:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.521 21:21:09 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:42.521 00:30:42.521 real 0m13.693s 00:30:42.521 user 0m10.342s 00:30:42.521 sys 0m6.794s 00:30:42.521 21:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:42.521 21:21:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:42.521 ************************************ 00:30:42.521 END TEST nvmf_identify_passthru 00:30:42.521 ************************************ 00:30:42.521 21:21:09 -- common/autotest_common.sh@1142 -- # return 0 00:30:42.521 21:21:09 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:42.521 21:21:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:42.521 21:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.521 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:30:42.521 ************************************ 00:30:42.521 START TEST nvmf_dif 00:30:42.521 ************************************ 00:30:42.521 21:21:09 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:42.521 * Looking for test storage... 
00:30:42.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.521 21:21:09 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.521 21:21:09 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.521 21:21:09 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.521 21:21:09 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.521 21:21:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.521 21:21:09 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.521 21:21:09 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.521 21:21:09 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:42.521 21:21:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:42.521 21:21:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:42.521 21:21:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:42.521 21:21:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:42.521 21:21:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:42.521 21:21:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:42.521 21:21:09 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:42.522 21:21:09 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.522 21:21:09 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:42.522 21:21:09 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:42.522 21:21:09 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:42.522 21:21:09 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.522 21:21:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:42.522 21:21:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.522 21:21:09 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:42.522 21:21:09 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:42.522 21:21:09 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:42.522 21:21:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.662 21:21:17 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:50.663 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:50.663 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:50.663 Found net devices under 0000:31:00.0: cvl_0_0 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:50.663 Found net devices under 0000:31:00.1: cvl_0_1 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.663 21:21:17 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:50.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:30:50.663 00:30:50.663 --- 10.0.0.2 ping statistics --- 00:30:50.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.663 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:30:50.663 00:30:50.663 --- 10.0.0.1 ping statistics --- 00:30:50.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.663 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:50.663 21:21:17 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:53.966 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:53.966 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:53.966 21:21:21 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:53.966 21:21:21 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:53.966 21:21:21 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:53.966 21:21:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2183071 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2183071 00:30:53.966 21:21:21 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:53.966 21:21:21 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2183071 ']' 00:30:53.966 21:21:21 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.966 21:21:21 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:53.966 21:21:21 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.966 21:21:21 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:53.966 21:21:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:54.229 [2024-07-15 21:21:21.268586] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:30:54.229 [2024-07-15 21:21:21.268646] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.229 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.229 [2024-07-15 21:21:21.350877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.229 [2024-07-15 21:21:21.423203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.229 [2024-07-15 21:21:21.423249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.229 [2024-07-15 21:21:21.423257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.230 [2024-07-15 21:21:21.423263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.230 [2024-07-15 21:21:21.423269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
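Condensed from the trace above, the target bring-up amounts to roughly the following sketch (the cvl_0_0_ns_spdk namespace name, addresses and paths are the ones echoed in this run; waitforlisten is the test-framework helper that polls the RPC socket at /var/tmp/spdk.sock):

    # confirm the veth pair between the host side and the target namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # setup.sh binds the PCI devices to vfio-pci (as echoed above) and prepares hugepages
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh

    # start the NVMe-oF target inside the namespace; -i 0 is the instance (shared-memory) id,
    # -e 0xFFFF enables all tracepoint groups, matching the notices printed by the app
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    waitforlisten $!    # returns once the app listens on /var/tmp/spdk.sock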
00:30:54.230 [2024-07-15 21:21:21.423291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.801 21:21:22 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:54.801 21:21:22 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:54.801 21:21:22 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:54.801 21:21:22 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:54.801 21:21:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:55.062 21:21:22 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.062 21:21:22 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:55.062 21:21:22 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:55.062 21:21:22 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.062 21:21:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:55.062 [2024-07-15 21:21:22.106142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.062 21:21:22 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.062 21:21:22 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:55.062 21:21:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:55.062 21:21:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.062 21:21:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:55.062 ************************************ 00:30:55.062 START TEST fio_dif_1_default 00:30:55.062 ************************************ 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:55.062 bdev_null0 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:55.062 [2024-07-15 21:21:22.190479] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.062 21:21:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.062 { 00:30:55.062 "params": { 00:30:55.062 "name": "Nvme$subsystem", 00:30:55.062 "trtype": "$TEST_TRANSPORT", 00:30:55.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.063 "adrfam": "ipv4", 00:30:55.063 "trsvcid": "$NVMF_PORT", 00:30:55.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.063 "hdgst": ${hdgst:-false}, 00:30:55.063 "ddgst": ${ddgst:-false} 00:30:55.063 }, 00:30:55.063 "method": "bdev_nvme_attach_controller" 00:30:55.063 } 00:30:55.063 EOF 00:30:55.063 )") 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:55.063 "params": { 00:30:55.063 "name": "Nvme0", 00:30:55.063 "trtype": "tcp", 00:30:55.063 "traddr": "10.0.0.2", 00:30:55.063 "adrfam": "ipv4", 00:30:55.063 "trsvcid": "4420", 00:30:55.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.063 "hdgst": false, 00:30:55.063 "ddgst": false 00:30:55.063 }, 00:30:55.063 "method": "bdev_nvme_attach_controller" 00:30:55.063 }' 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:55.063 21:21:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.632 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:55.632 fio-3.35 00:30:55.632 Starting 1 thread 00:30:55.632 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.856 00:31:07.856 filename0: (groupid=0, jobs=1): err= 0: pid=2183601: Mon Jul 15 21:21:33 2024 00:31:07.856 read: IOPS=140, BW=561KiB/s (574kB/s)(5616KiB/10012msec) 00:31:07.856 slat (nsec): min=5391, max=32375, avg=5940.83, stdev=1341.12 00:31:07.856 clat (usec): min=766, max=43960, avg=28508.05, stdev=19183.59 00:31:07.856 lat (usec): min=772, max=43992, avg=28513.99, stdev=19183.88 00:31:07.856 clat percentiles (usec): 00:31:07.856 | 1.00th=[ 865], 5.00th=[ 971], 10.00th=[ 1037], 20.00th=[ 1057], 00:31:07.856 | 30.00th=[ 1106], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:31:07.856 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:07.856 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:31:07.856 | 99.99th=[43779] 00:31:07.856 bw ( KiB/s): min= 352, max= 768, per=99.83%, avg=560.00, stdev=176.95, samples=20 00:31:07.856 iops : min= 88, max= 192, 
avg=140.00, stdev=44.24, samples=20 00:31:07.856 lat (usec) : 1000=5.34% 00:31:07.856 lat (msec) : 2=27.42%, 50=67.24% 00:31:07.856 cpu : usr=95.34%, sys=4.47%, ctx=10, majf=0, minf=223 00:31:07.856 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.856 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.856 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:07.856 00:31:07.856 Run status group 0 (all jobs): 00:31:07.856 READ: bw=561KiB/s (574kB/s), 561KiB/s-561KiB/s (574kB/s-574kB/s), io=5616KiB (5751kB), run=10012-10012msec 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.856 00:31:07.856 real 0m11.216s 00:31:07.856 user 0m26.635s 00:31:07.856 sys 0m0.787s 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:07.856 ************************************ 00:31:07.856 END TEST fio_dif_1_default 00:31:07.856 ************************************ 00:31:07.856 21:21:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:07.856 21:21:33 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:07.856 21:21:33 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:07.856 21:21:33 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.856 21:21:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:07.856 ************************************ 00:31:07.856 START TEST fio_dif_1_multi_subsystems 00:31:07.856 ************************************ 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:07.856 bdev_null0 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:07.856 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:07.857 [2024-07-15 21:21:33.485244] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:07.857 bdev_null1 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
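Every subsystem in these DIF tests is stood up with the same four RPC calls; condensed from the trace, the per-index sequence is roughly the following (index 0 shown, the multi-subsystem test repeats it with cnode1/bdev_null1; rpc_cmd is the autotest wrapper that forwards these calls to the target's RPC socket):

    # 64 MiB null bdev, 512-byte blocks plus 16 bytes of metadata carrying DIF type 1
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Because the transport was created earlier with nvmf_create_transport -t tcp -o --dif-insert-or-strip, the target inserts and strips the protection information itself, so the initiator side of these tests sees plain 512-byte blocks.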
00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:07.857 { 00:31:07.857 "params": { 00:31:07.857 "name": "Nvme$subsystem", 00:31:07.857 "trtype": "$TEST_TRANSPORT", 00:31:07.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.857 "adrfam": "ipv4", 00:31:07.857 "trsvcid": "$NVMF_PORT", 00:31:07.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.857 "hdgst": ${hdgst:-false}, 00:31:07.857 "ddgst": ${ddgst:-false} 00:31:07.857 }, 00:31:07.857 "method": "bdev_nvme_attach_controller" 00:31:07.857 } 00:31:07.857 EOF 00:31:07.857 )") 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:07.857 { 00:31:07.857 "params": { 00:31:07.857 "name": "Nvme$subsystem", 00:31:07.857 "trtype": "$TEST_TRANSPORT", 00:31:07.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.857 "adrfam": "ipv4", 00:31:07.857 "trsvcid": "$NVMF_PORT", 00:31:07.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.857 "hdgst": ${hdgst:-false}, 00:31:07.857 "ddgst": ${ddgst:-false} 00:31:07.857 }, 00:31:07.857 "method": "bdev_nvme_attach_controller" 00:31:07.857 } 00:31:07.857 EOF 00:31:07.857 )") 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
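What fio is handed next: the comma-joined bdev_nvme_attach_controller objects printed just below go in on /dev/fd/62 as the SPDK JSON config, and the generated job file goes in on /dev/fd/61. A roughly equivalent standalone invocation is sketched here; the outer "subsystems"/"bdev"/"config" wrapper and the filename=Nvme0n1 entry are assumptions (the standard SPDK JSON-config layout and the usual bdev name for an attached controller), while the parameter values, the LD_PRELOAD path and the fio command line are taken verbatim from this trace:

    cat > bdev.json <<'JSON'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } } ] } ] }
    JSON

    cat > dif.fio <<'FIO'
    [global]
    thread=1                 # required by the SPDK fio plugin
    rw=randread
    bs=4096
    iodepth=4
    [filename0]
    filename=Nvme0n1         # assumed bdev name; the second subsystem adds Nvme1n1 under [filename1]
    FIO

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio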
00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:07.857 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:07.857 "params": { 00:31:07.857 "name": "Nvme0", 00:31:07.857 "trtype": "tcp", 00:31:07.857 "traddr": "10.0.0.2", 00:31:07.857 "adrfam": "ipv4", 00:31:07.857 "trsvcid": "4420", 00:31:07.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:07.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:07.857 "hdgst": false, 00:31:07.857 "ddgst": false 00:31:07.857 }, 00:31:07.857 "method": "bdev_nvme_attach_controller" 00:31:07.857 },{ 00:31:07.857 "params": { 00:31:07.857 "name": "Nvme1", 00:31:07.857 "trtype": "tcp", 00:31:07.857 "traddr": "10.0.0.2", 00:31:07.857 "adrfam": "ipv4", 00:31:07.858 "trsvcid": "4420", 00:31:07.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.858 "hdgst": false, 00:31:07.858 "ddgst": false 00:31:07.858 }, 00:31:07.858 "method": "bdev_nvme_attach_controller" 00:31:07.858 }' 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:07.858 21:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.858 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:07.858 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:07.858 fio-3.35 00:31:07.858 Starting 2 threads 00:31:07.858 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.851 00:31:17.851 filename0: (groupid=0, jobs=1): err= 0: pid=2186102: Mon Jul 15 21:21:44 2024 00:31:17.851 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:31:17.851 slat (nsec): min=5402, max=46906, avg=6354.04, stdev=1996.44 00:31:17.851 clat (usec): min=40881, max=43062, avg=41996.80, stdev=198.62 00:31:17.851 lat (usec): min=40886, max=43067, avg=42003.15, stdev=198.65 00:31:17.851 clat percentiles (usec): 00:31:17.851 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:17.851 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:17.851 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:17.851 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:17.851 | 99.99th=[43254] 
00:31:17.851 bw ( KiB/s): min= 352, max= 384, per=33.93%, avg=380.80, stdev= 9.85, samples=20 00:31:17.851 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:17.851 lat (msec) : 50=100.00% 00:31:17.851 cpu : usr=96.60%, sys=3.20%, ctx=14, majf=0, minf=199 00:31:17.851 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.851 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.851 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:17.851 filename1: (groupid=0, jobs=1): err= 0: pid=2186103: Mon Jul 15 21:21:44 2024 00:31:17.851 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10014msec) 00:31:17.851 slat (nsec): min=5414, max=39403, avg=6306.98, stdev=1380.41 00:31:17.851 clat (usec): min=803, max=42392, avg=21563.21, stdev=20434.95 00:31:17.851 lat (usec): min=808, max=42431, avg=21569.52, stdev=20434.92 00:31:17.851 clat percentiles (usec): 00:31:17.851 | 1.00th=[ 873], 5.00th=[ 1004], 10.00th=[ 1020], 20.00th=[ 1037], 00:31:17.851 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[41157], 60.00th=[41681], 00:31:17.851 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:31:17.851 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:17.851 | 99.99th=[42206] 00:31:17.851 bw ( KiB/s): min= 672, max= 768, per=66.07%, avg=740.80, stdev=34.86, samples=20 00:31:17.851 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:31:17.851 lat (usec) : 1000=4.58% 00:31:17.852 lat (msec) : 2=45.20%, 50=50.22% 00:31:17.852 cpu : usr=96.62%, sys=3.18%, ctx=16, majf=0, minf=95 00:31:17.852 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.852 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.852 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:17.852 00:31:17.852 Run status group 0 (all jobs): 00:31:17.852 READ: bw=1120KiB/s (1147kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10014-10042msec 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.852 00:31:17.852 real 0m11.430s 00:31:17.852 user 0m35.271s 00:31:17.852 sys 0m1.011s 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 ************************************ 00:31:17.852 END TEST fio_dif_1_multi_subsystems 00:31:17.852 ************************************ 00:31:17.852 21:21:44 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:17.852 21:21:44 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:17.852 21:21:44 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:17.852 21:21:44 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 ************************************ 00:31:17.852 START TEST fio_dif_rand_params 00:31:17.852 ************************************ 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 bdev_null0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.852 [2024-07-15 21:21:44.996103] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.852 21:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:17.852 { 00:31:17.852 "params": { 00:31:17.852 "name": "Nvme$subsystem", 00:31:17.852 "trtype": "$TEST_TRANSPORT", 00:31:17.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.852 "adrfam": "ipv4", 00:31:17.852 "trsvcid": "$NVMF_PORT", 00:31:17.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.852 "hdgst": 
${hdgst:-false}, 00:31:17.852 "ddgst": ${ddgst:-false} 00:31:17.852 }, 00:31:17.852 "method": "bdev_nvme_attach_controller" 00:31:17.852 } 00:31:17.852 EOF 00:31:17.852 )") 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:17.852 "params": { 00:31:17.852 "name": "Nvme0", 00:31:17.852 "trtype": "tcp", 00:31:17.852 "traddr": "10.0.0.2", 00:31:17.852 "adrfam": "ipv4", 00:31:17.852 "trsvcid": "4420", 00:31:17.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:17.852 "hdgst": false, 00:31:17.852 "ddgst": false 00:31:17.852 }, 00:31:17.852 "method": "bdev_nvme_attach_controller" 00:31:17.852 }' 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:17.852 21:21:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.419 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:18.419 ... 
00:31:18.419 fio-3.35 00:31:18.419 Starting 3 threads 00:31:18.419 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.000 00:31:25.000 filename0: (groupid=0, jobs=1): err= 0: pid=2188309: Mon Jul 15 21:21:51 2024 00:31:25.000 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(126MiB/5046msec) 00:31:25.000 slat (nsec): min=5510, max=31870, avg=7821.70, stdev=1784.86 00:31:25.000 clat (usec): min=5632, max=92655, avg=14991.99, stdev=14147.41 00:31:25.000 lat (usec): min=5640, max=92664, avg=14999.81, stdev=14147.45 00:31:25.000 clat percentiles (usec): 00:31:25.000 | 1.00th=[ 5932], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 8586], 00:31:25.000 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10945], 60.00th=[11600], 00:31:25.000 | 70.00th=[12387], 80.00th=[13566], 90.00th=[17433], 95.00th=[51643], 00:31:25.000 | 99.00th=[89654], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:31:25.000 | 99.99th=[92799] 00:31:25.000 bw ( KiB/s): min=14848, max=35584, per=34.77%, avg=25681.10, stdev=6375.86, samples=10 00:31:25.000 iops : min= 116, max= 278, avg=200.60, stdev=49.84, samples=10 00:31:25.000 lat (msec) : 10=38.67%, 20=51.49%, 50=2.88%, 100=6.96% 00:31:25.000 cpu : usr=96.02%, sys=3.71%, ctx=11, majf=0, minf=73 00:31:25.000 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.000 issued rwts: total=1006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.000 filename0: (groupid=0, jobs=1): err= 0: pid=2188310: Mon Jul 15 21:21:51 2024 00:31:25.000 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(120MiB/5005msec) 00:31:25.000 slat (nsec): min=5453, max=29916, avg=7804.78, stdev=1592.47 00:31:25.000 clat (usec): min=5851, max=93293, avg=15626.95, stdev=14622.78 00:31:25.000 lat (usec): min=5859, max=93302, avg=15634.76, stdev=14623.08 00:31:25.000 clat percentiles (usec): 00:31:25.000 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 8225], 00:31:25.000 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11600], 00:31:25.000 | 70.00th=[12649], 80.00th=[14353], 90.00th=[49021], 95.00th=[51643], 00:31:25.000 | 99.00th=[54264], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:31:25.000 | 99.99th=[92799] 00:31:25.000 bw ( KiB/s): min=15360, max=33536, per=33.17%, avg=24499.20, stdev=6536.83, samples=10 00:31:25.000 iops : min= 120, max= 262, avg=191.40, stdev=51.07, samples=10 00:31:25.000 lat (msec) : 10=43.65%, 20=43.85%, 50=4.69%, 100=7.81% 00:31:25.000 cpu : usr=96.22%, sys=3.50%, ctx=12, majf=0, minf=108 00:31:25.000 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.000 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.000 filename0: (groupid=0, jobs=1): err= 0: pid=2188311: Mon Jul 15 21:21:51 2024 00:31:25.000 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(118MiB/5038msec) 00:31:25.000 slat (nsec): min=5420, max=32526, avg=8053.38, stdev=1773.84 00:31:25.000 clat (usec): min=5249, max=90365, avg=15962.59, stdev=15621.23 00:31:25.000 lat (usec): min=5257, max=90370, avg=15970.64, stdev=15621.09 00:31:25.000 clat percentiles (usec): 
00:31:25.000 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7504], 00:31:25.000 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10290], 00:31:25.000 | 70.00th=[11207], 80.00th=[12911], 90.00th=[49546], 95.00th=[50594], 00:31:25.000 | 99.00th=[53216], 99.50th=[53740], 99.90th=[90702], 99.95th=[90702], 00:31:25.000 | 99.99th=[90702] 00:31:25.000 bw ( KiB/s): min=18688, max=29696, per=32.68%, avg=24140.80, stdev=4180.55, samples=10 00:31:25.000 iops : min= 146, max= 232, avg=188.60, stdev=32.66, samples=10 00:31:25.000 lat (msec) : 10=56.24%, 20=27.17%, 50=9.83%, 100=6.77% 00:31:25.000 cpu : usr=96.64%, sys=3.08%, ctx=9, majf=0, minf=110 00:31:25.000 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.000 issued rwts: total=946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.000 00:31:25.000 Run status group 0 (all jobs): 00:31:25.000 READ: bw=72.1MiB/s (75.6MB/s), 23.5MiB/s-24.9MiB/s (24.6MB/s-26.1MB/s), io=364MiB (382MB), run=5005-5046msec 00:31:25.000 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:25.000 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:25.000 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:25.000 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:25.000 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
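fio_dif_rand_params walks through several parameter sets. The round that just finished and the one being set up here differ only in the variables echoed by the trace (values verbatim from the lines above):

    # round 1: one subsystem, DIF type 3, 128 KiB random reads
    NULL_DIF=3  bs=128k  numjobs=3  iodepth=3   runtime=5
    # round 2: three subsystems (cnode0..cnode2), each on a --dif-type 2 null bdev
    NULL_DIF=2  bs=4k    numjobs=8  iodepth=16  runtime=  files=2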
00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 bdev_null0 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 [2024-07-15 21:21:51.239337] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 bdev_null1 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 bdev_null2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.001 { 00:31:25.001 "params": { 00:31:25.001 "name": "Nvme$subsystem", 00:31:25.001 "trtype": "$TEST_TRANSPORT", 00:31:25.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.001 "adrfam": "ipv4", 00:31:25.001 "trsvcid": "$NVMF_PORT", 00:31:25.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.001 "hdgst": ${hdgst:-false}, 00:31:25.001 "ddgst": ${ddgst:-false} 00:31:25.001 }, 00:31:25.001 "method": "bdev_nvme_attach_controller" 00:31:25.001 } 00:31:25.001 EOF 00:31:25.001 )") 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.001 { 00:31:25.001 "params": { 00:31:25.001 "name": "Nvme$subsystem", 00:31:25.001 "trtype": "$TEST_TRANSPORT", 00:31:25.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.001 "adrfam": "ipv4", 00:31:25.001 "trsvcid": "$NVMF_PORT", 00:31:25.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.001 "hdgst": ${hdgst:-false}, 00:31:25.001 "ddgst": ${ddgst:-false} 00:31:25.001 }, 00:31:25.001 "method": "bdev_nvme_attach_controller" 00:31:25.001 } 00:31:25.001 EOF 00:31:25.001 )") 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:25.001 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:25.001 { 00:31:25.002 "params": { 00:31:25.002 "name": "Nvme$subsystem", 00:31:25.002 "trtype": "$TEST_TRANSPORT", 00:31:25.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.002 "adrfam": "ipv4", 00:31:25.002 "trsvcid": "$NVMF_PORT", 00:31:25.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.002 "hdgst": ${hdgst:-false}, 00:31:25.002 "ddgst": ${ddgst:-false} 00:31:25.002 }, 00:31:25.002 "method": "bdev_nvme_attach_controller" 00:31:25.002 } 00:31:25.002 EOF 00:31:25.002 )") 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:25.002 "params": { 00:31:25.002 "name": "Nvme0", 00:31:25.002 "trtype": "tcp", 00:31:25.002 "traddr": "10.0.0.2", 00:31:25.002 "adrfam": "ipv4", 00:31:25.002 "trsvcid": "4420", 00:31:25.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:25.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:25.002 "hdgst": false, 00:31:25.002 "ddgst": false 00:31:25.002 }, 00:31:25.002 "method": "bdev_nvme_attach_controller" 00:31:25.002 },{ 00:31:25.002 "params": { 00:31:25.002 "name": "Nvme1", 00:31:25.002 "trtype": "tcp", 00:31:25.002 "traddr": "10.0.0.2", 00:31:25.002 "adrfam": "ipv4", 00:31:25.002 "trsvcid": "4420", 00:31:25.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.002 "hdgst": false, 00:31:25.002 "ddgst": false 00:31:25.002 }, 00:31:25.002 "method": "bdev_nvme_attach_controller" 00:31:25.002 },{ 00:31:25.002 "params": { 00:31:25.002 "name": "Nvme2", 00:31:25.002 "trtype": "tcp", 00:31:25.002 "traddr": "10.0.0.2", 00:31:25.002 "adrfam": "ipv4", 00:31:25.002 "trsvcid": "4420", 00:31:25.002 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:25.002 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:25.002 "hdgst": false, 00:31:25.002 "ddgst": false 00:31:25.002 }, 00:31:25.002 "method": "bdev_nvme_attach_controller" 00:31:25.002 }' 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:25.002 21:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.002 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:25.002 ... 00:31:25.002 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:25.002 ... 00:31:25.002 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:25.002 ... 00:31:25.002 fio-3.35 00:31:25.002 Starting 24 threads 00:31:25.002 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.221 00:31:37.221 filename0: (groupid=0, jobs=1): err= 0: pid=2189810: Mon Jul 15 21:22:02 2024 00:31:37.221 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10030msec) 00:31:37.221 slat (usec): min=5, max=114, avg=15.58, stdev=12.99 00:31:37.221 clat (usec): min=6302, max=55343, avg=32126.29, stdev=3285.39 00:31:37.221 lat (usec): min=6314, max=55357, avg=32141.87, stdev=3283.97 00:31:37.221 clat percentiles (usec): 00:31:37.221 | 1.00th=[ 8717], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:37.221 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.221 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.221 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[42206], 00:31:37.221 | 99.99th=[55313] 00:31:37.221 bw ( KiB/s): min= 1916, max= 2308, per=4.20%, avg=1984.00, stdev=98.23, samples=20 00:31:37.221 iops : min= 479, max= 577, avg=496.00, stdev=24.56, samples=20 00:31:37.221 lat (msec) : 10=1.29%, 20=1.00%, 50=97.67%, 100=0.04% 00:31:37.221 cpu : usr=99.23%, sys=0.50%, ctx=16, majf=0, minf=49 00:31:37.221 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:37.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.221 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.221 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.221 filename0: (groupid=0, jobs=1): err= 0: pid=2189811: Mon Jul 15 21:22:02 2024 00:31:37.221 read: IOPS=508, BW=2034KiB/s (2083kB/s)(19.9MiB/10028msec) 00:31:37.221 slat (usec): min=5, max=112, avg= 7.60, stdev= 5.25 00:31:37.221 clat (usec): min=4234, max=43078, avg=31396.46, stdev=4299.57 00:31:37.221 lat (usec): min=4249, max=43085, avg=31404.06, stdev=4298.33 00:31:37.221 clat percentiles (usec): 00:31:37.221 | 1.00th=[ 8586], 5.00th=[21365], 10.00th=[28443], 20.00th=[31589], 00:31:37.221 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.221 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.221 | 99.00th=[35390], 99.50th=[35390], 99.90th=[42206], 99.95th=[42206], 00:31:37.221 | 99.99th=[43254] 00:31:37.221 bw ( KiB/s): min= 1920, max= 2448, per=4.30%, avg=2033.35, stdev=157.17, samples=20 00:31:37.221 iops : min= 480, max= 612, avg=508.30, stdev=39.29, samples=20 00:31:37.221 lat (msec) : 10=1.22%, 20=2.90%, 50=95.88% 
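The shell trace above drives the whole target-side setup through SPDK's JSON-RPC interface: for each subsystem it creates a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 2, registers an NVMe-oF subsystem that allows any host, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. As a minimal standalone sketch, the same four calls can be issued with scripts/rpc.py against an already-running nvmf_tgt; rpc_cmd in the log is the test suite's wrapper around that script, and the rpc.py path and the pre-created TCP transport are assumptions, since neither appears in this excerpt.

# Sketch only: assumes an SPDK nvmf_tgt is already running and the tcp transport exists.
RPC=./scripts/rpc.py

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, protection information (DIF) type 2
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# NVMe-oF subsystem with the null bdev as its namespace and a TCP listener on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420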
00:31:37.221 cpu : usr=99.02%, sys=0.67%, ctx=73, majf=0, minf=62 00:31:37.221 IO depths : 1=5.6%, 2=11.6%, 4=24.2%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:37.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.221 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.221 issued rwts: total=5100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.221 filename0: (groupid=0, jobs=1): err= 0: pid=2189812: Mon Jul 15 21:22:02 2024 00:31:37.221 read: IOPS=495, BW=1981KiB/s (2028kB/s)(19.4MiB/10016msec) 00:31:37.221 slat (nsec): min=5580, max=89287, avg=11128.12, stdev=8911.43 00:31:37.221 clat (usec): min=8473, max=50938, avg=32218.24, stdev=2808.23 00:31:37.221 lat (usec): min=8514, max=50945, avg=32229.36, stdev=2807.97 00:31:37.221 clat percentiles (usec): 00:31:37.221 | 1.00th=[17171], 5.00th=[31065], 10.00th=[31589], 20.00th=[32113], 00:31:37.221 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.221 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.221 | 99.00th=[35390], 99.50th=[35390], 99.90th=[44303], 99.95th=[49546], 00:31:37.221 | 99.99th=[51119] 00:31:37.221 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1980.63, stdev=65.66, samples=19 00:31:37.221 iops : min= 480, max= 512, avg=495.16, stdev=16.42, samples=19 00:31:37.221 lat (msec) : 10=0.32%, 20=1.65%, 50=97.98%, 100=0.04% 00:31:37.221 cpu : usr=98.82%, sys=0.85%, ctx=105, majf=0, minf=54 00:31:37.221 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:37.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.221 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.221 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.221 filename0: (groupid=0, jobs=1): err= 0: pid=2189813: Mon Jul 15 21:22:02 2024 00:31:37.221 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10017msec) 00:31:37.221 slat (usec): min=5, max=141, avg=19.30, stdev=16.22 00:31:37.221 clat (usec): min=15393, max=41416, avg=32253.46, stdev=2221.59 00:31:37.221 lat (usec): min=15400, max=41426, avg=32272.76, stdev=2222.45 00:31:37.221 clat percentiles (usec): 00:31:37.221 | 1.00th=[19530], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:37.221 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.221 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.221 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:31:37.221 | 99.99th=[41157] 00:31:37.221 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1973.63, stdev=64.62, samples=19 00:31:37.221 iops : min= 480, max= 512, avg=493.37, stdev=16.11, samples=19 00:31:37.221 lat (msec) : 20=1.54%, 50=98.46% 00:31:37.221 cpu : usr=98.74%, sys=0.74%, ctx=25, majf=0, minf=46 00:31:37.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:37.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.221 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.221 filename0: (groupid=0, jobs=1): err= 0: pid=2189814: Mon Jul 15 21:22:02 2024 00:31:37.221 read: IOPS=488, 
BW=1955KiB/s (2002kB/s)(19.1MiB/10003msec) 00:31:37.221 slat (usec): min=5, max=100, avg=20.03, stdev=16.02 00:31:37.221 clat (usec): min=3945, max=79897, avg=32595.24, stdev=4577.43 00:31:37.221 lat (usec): min=3951, max=79913, avg=32615.27, stdev=4577.58 00:31:37.221 clat percentiles (usec): 00:31:37.221 | 1.00th=[18220], 5.00th=[24773], 10.00th=[30540], 20.00th=[31851], 00:31:37.221 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.221 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34866], 95.00th=[39584], 00:31:37.221 | 99.00th=[47449], 99.50th=[51119], 99.90th=[60556], 99.95th=[60556], 00:31:37.221 | 99.99th=[80217] 00:31:37.221 bw ( KiB/s): min= 1772, max= 2048, per=4.11%, avg=1943.37, stdev=69.54, samples=19 00:31:37.221 iops : min= 443, max= 512, avg=485.84, stdev=17.39, samples=19 00:31:37.222 lat (msec) : 4=0.12%, 10=0.20%, 20=1.49%, 50=97.48%, 100=0.70% 00:31:37.222 cpu : usr=99.03%, sys=0.66%, ctx=30, majf=0, minf=57 00:31:37.222 IO depths : 1=1.0%, 2=3.5%, 4=13.3%, 8=69.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:31:37.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 complete : 0=0.0%, 4=91.9%, 8=3.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 issued rwts: total=4890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.222 filename0: (groupid=0, jobs=1): err= 0: pid=2189815: Mon Jul 15 21:22:02 2024 00:31:37.222 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10005msec) 00:31:37.222 slat (usec): min=5, max=105, avg=21.91, stdev=15.93 00:31:37.222 clat (usec): min=14088, max=66436, avg=32339.82, stdev=3899.29 00:31:37.222 lat (usec): min=14115, max=66451, avg=32361.73, stdev=3899.72 00:31:37.222 clat percentiles (usec): 00:31:37.222 | 1.00th=[19530], 5.00th=[25560], 10.00th=[31065], 20.00th=[31589], 00:31:37.222 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:37.222 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[35390], 00:31:37.222 | 99.00th=[49546], 99.50th=[53740], 99.90th=[66323], 99.95th=[66323], 00:31:37.222 | 99.99th=[66323] 00:31:37.222 bw ( KiB/s): min= 1712, max= 2192, per=4.15%, avg=1963.79, stdev=104.08, samples=19 00:31:37.222 iops : min= 428, max= 548, avg=490.95, stdev=26.02, samples=19 00:31:37.222 lat (msec) : 20=1.63%, 50=97.48%, 100=0.89% 00:31:37.222 cpu : usr=99.15%, sys=0.53%, ctx=28, majf=0, minf=66 00:31:37.222 IO depths : 1=4.6%, 2=9.3%, 4=20.7%, 8=57.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:37.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.222 filename0: (groupid=0, jobs=1): err= 0: pid=2189816: Mon Jul 15 21:22:02 2024 00:31:37.222 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10007msec) 00:31:37.222 slat (nsec): min=5472, max=87401, avg=21146.67, stdev=13347.61 00:31:37.222 clat (usec): min=9469, max=59405, avg=32491.82, stdev=2263.83 00:31:37.222 lat (usec): min=9475, max=59420, avg=32512.97, stdev=2263.88 00:31:37.222 clat percentiles (usec): 00:31:37.222 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:37.222 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:37.222 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:37.222 | 
99.00th=[34866], 99.50th=[35390], 99.90th=[59507], 99.95th=[59507], 00:31:37.222 | 99.99th=[59507] 00:31:37.222 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1946.68, stdev=68.12, samples=19 00:31:37.222 iops : min= 448, max= 512, avg=486.63, stdev=16.97, samples=19 00:31:37.222 lat (msec) : 10=0.29%, 20=0.04%, 50=99.35%, 100=0.33% 00:31:37.222 cpu : usr=98.19%, sys=0.99%, ctx=787, majf=0, minf=47 00:31:37.222 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:37.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.222 filename0: (groupid=0, jobs=1): err= 0: pid=2189817: Mon Jul 15 21:22:02 2024 00:31:37.222 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10022msec) 00:31:37.222 slat (usec): min=5, max=116, avg=23.97, stdev=16.99 00:31:37.222 clat (usec): min=15990, max=54628, avg=32359.53, stdev=2753.68 00:31:37.222 lat (usec): min=16001, max=54663, avg=32383.50, stdev=2754.33 00:31:37.222 clat percentiles (usec): 00:31:37.222 | 1.00th=[20841], 5.00th=[30802], 10.00th=[31327], 20.00th=[31851], 00:31:37.222 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:37.222 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.222 | 99.00th=[40109], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:31:37.222 | 99.99th=[54789] 00:31:37.222 bw ( KiB/s): min= 1792, max= 2192, per=4.14%, avg=1957.05, stdev=92.08, samples=19 00:31:37.222 iops : min= 448, max= 548, avg=489.26, stdev=23.02, samples=19 00:31:37.222 lat (msec) : 20=0.20%, 50=99.11%, 100=0.69% 00:31:37.222 cpu : usr=99.19%, sys=0.51%, ctx=9, majf=0, minf=46 00:31:37.222 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:37.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.222 filename1: (groupid=0, jobs=1): err= 0: pid=2189818: Mon Jul 15 21:22:02 2024 00:31:37.222 read: IOPS=493, BW=1975KiB/s (2023kB/s)(19.3MiB/10003msec) 00:31:37.222 slat (nsec): min=5590, max=99913, avg=18031.34, stdev=13625.48 00:31:37.222 clat (usec): min=12440, max=51925, avg=32243.14, stdev=2458.84 00:31:37.222 lat (usec): min=12447, max=51980, avg=32261.17, stdev=2459.59 00:31:37.222 clat percentiles (usec): 00:31:37.222 | 1.00th=[19530], 5.00th=[30802], 10.00th=[31589], 20.00th=[31851], 00:31:37.222 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.222 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.222 | 99.00th=[38011], 99.50th=[38536], 99.90th=[51119], 99.95th=[51119], 00:31:37.222 | 99.99th=[52167] 00:31:37.222 bw ( KiB/s): min= 1916, max= 2224, per=4.17%, avg=1972.00, stdev=83.43, samples=19 00:31:37.222 iops : min= 479, max= 556, avg=493.00, stdev=20.86, samples=19 00:31:37.222 lat (msec) : 20=1.15%, 50=98.64%, 100=0.20% 00:31:37.222 cpu : usr=99.06%, sys=0.58%, ctx=48, majf=0, minf=63 00:31:37.222 IO depths : 1=5.7%, 2=11.7%, 4=24.1%, 8=51.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:37.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 
complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.222 filename1: (groupid=0, jobs=1): err= 0: pid=2189819: Mon Jul 15 21:22:02 2024 00:31:37.222 read: IOPS=491, BW=1964KiB/s (2011kB/s)(19.2MiB/10004msec) 00:31:37.222 slat (usec): min=5, max=111, avg=23.27, stdev=17.18 00:31:37.222 clat (usec): min=8483, max=35957, avg=32373.60, stdev=1954.47 00:31:37.222 lat (usec): min=8512, max=36008, avg=32396.86, stdev=1953.73 00:31:37.222 clat percentiles (usec): 00:31:37.222 | 1.00th=[30278], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:37.222 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.222 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:37.222 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:31:37.222 | 99.99th=[35914] 00:31:37.222 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1960.42, stdev=61.13, samples=19 00:31:37.222 iops : min= 480, max= 512, avg=490.11, stdev=15.28, samples=19 00:31:37.222 lat (msec) : 10=0.33%, 20=0.65%, 50=99.02% 00:31:37.222 cpu : usr=99.19%, sys=0.52%, ctx=12, majf=0, minf=51 00:31:37.222 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:37.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.222 filename1: (groupid=0, jobs=1): err= 0: pid=2189820: Mon Jul 15 21:22:02 2024 00:31:37.222 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10003msec) 00:31:37.222 slat (nsec): min=5741, max=85384, avg=19265.52, stdev=11619.38 00:31:37.222 clat (usec): min=9365, max=55188, avg=32508.78, stdev=2092.15 00:31:37.222 lat (usec): min=9371, max=55205, avg=32528.05, stdev=2092.30 00:31:37.222 clat percentiles (usec): 00:31:37.222 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:37.222 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.222 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.222 | 99.00th=[34866], 99.50th=[35390], 99.90th=[55313], 99.95th=[55313], 00:31:37.222 | 99.99th=[55313] 00:31:37.222 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1946.95, stdev=68.52, samples=19 00:31:37.222 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:31:37.222 lat (msec) : 10=0.20%, 20=0.12%, 50=99.35%, 100=0.33% 00:31:37.222 cpu : usr=99.18%, sys=0.53%, ctx=10, majf=0, minf=49 00:31:37.222 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:37.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.222 filename1: (groupid=0, jobs=1): err= 0: pid=2189821: Mon Jul 15 21:22:02 2024 00:31:37.222 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10004msec) 00:31:37.222 slat (nsec): min=5559, max=92173, avg=16240.72, stdev=12999.81 00:31:37.222 clat (usec): min=3724, max=65307, avg=32843.16, stdev=3736.33 00:31:37.222 lat (usec): min=3730, max=65323, 
avg=32859.40, stdev=3736.74 00:31:37.222 clat percentiles (usec): 00:31:37.222 | 1.00th=[19006], 5.00th=[30802], 10.00th=[31589], 20.00th=[32113], 00:31:37.222 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:31:37.222 | 70.00th=[33162], 80.00th=[33424], 90.00th=[34341], 95.00th=[35914], 00:31:37.222 | 99.00th=[46924], 99.50th=[53740], 99.90th=[55837], 99.95th=[65274], 00:31:37.222 | 99.99th=[65274] 00:31:37.222 bw ( KiB/s): min= 1763, max= 2036, per=4.09%, avg=1933.00, stdev=62.91, samples=19 00:31:37.222 iops : min= 440, max= 509, avg=483.21, stdev=15.84, samples=19 00:31:37.222 lat (msec) : 4=0.25%, 20=0.84%, 50=98.09%, 100=0.82% 00:31:37.222 cpu : usr=98.82%, sys=0.87%, ctx=21, majf=0, minf=47 00:31:37.222 IO depths : 1=0.9%, 2=3.3%, 4=15.2%, 8=67.1%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:37.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 complete : 0=0.0%, 4=92.6%, 8=3.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.222 issued rwts: total=4858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.222 filename1: (groupid=0, jobs=1): err= 0: pid=2189822: Mon Jul 15 21:22:02 2024 00:31:37.222 read: IOPS=490, BW=1961KiB/s (2009kB/s)(19.2MiB/10017msec) 00:31:37.222 slat (nsec): min=5588, max=59269, avg=12402.76, stdev=7885.97 00:31:37.222 clat (usec): min=16824, max=54761, avg=32515.82, stdev=1711.39 00:31:37.222 lat (usec): min=16830, max=54767, avg=32528.22, stdev=1711.87 00:31:37.222 clat percentiles (usec): 00:31:37.222 | 1.00th=[23462], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:37.222 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.222 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.222 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[42730], 00:31:37.222 | 99.99th=[54789] 00:31:37.222 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1960.32, stdev=74.14, samples=19 00:31:37.222 iops : min= 448, max= 512, avg=490.00, stdev=18.51, samples=19 00:31:37.223 lat (msec) : 20=0.33%, 50=99.63%, 100=0.04% 00:31:37.223 cpu : usr=99.31%, sys=0.40%, ctx=9, majf=0, minf=63 00:31:37.223 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:37.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.223 filename1: (groupid=0, jobs=1): err= 0: pid=2189823: Mon Jul 15 21:22:02 2024 00:31:37.223 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10022msec) 00:31:37.223 slat (nsec): min=5577, max=64328, avg=10202.07, stdev=6369.40 00:31:37.223 clat (usec): min=15769, max=38966, avg=32452.37, stdev=1885.24 00:31:37.223 lat (usec): min=15775, max=38987, avg=32462.57, stdev=1885.45 00:31:37.223 clat percentiles (usec): 00:31:37.223 | 1.00th=[21627], 5.00th=[31327], 10.00th=[31851], 20.00th=[32113], 00:31:37.223 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.223 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.223 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:31:37.223 | 99.99th=[39060] 00:31:37.223 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1964.80, stdev=62.64, samples=20 00:31:37.223 iops : min= 480, max= 512, 
avg=491.20, stdev=15.66, samples=20 00:31:37.223 lat (msec) : 20=0.32%, 50=99.68% 00:31:37.223 cpu : usr=99.12%, sys=0.59%, ctx=8, majf=0, minf=48 00:31:37.223 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:37.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.223 filename1: (groupid=0, jobs=1): err= 0: pid=2189824: Mon Jul 15 21:22:02 2024 00:31:37.223 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10008msec) 00:31:37.223 slat (nsec): min=5585, max=93606, avg=22572.17, stdev=14929.39 00:31:37.223 clat (usec): min=14013, max=64423, avg=32525.91, stdev=3257.78 00:31:37.223 lat (usec): min=14019, max=64444, avg=32548.48, stdev=3257.77 00:31:37.223 clat percentiles (usec): 00:31:37.223 | 1.00th=[22152], 5.00th=[30802], 10.00th=[31589], 20.00th=[31851], 00:31:37.223 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:37.223 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:31:37.223 | 99.00th=[44303], 99.50th=[53216], 99.90th=[64226], 99.95th=[64226], 00:31:37.223 | 99.99th=[64226] 00:31:37.223 bw ( KiB/s): min= 1792, max= 2080, per=4.12%, avg=1949.47, stdev=69.39, samples=19 00:31:37.223 iops : min= 448, max= 520, avg=487.37, stdev=17.35, samples=19 00:31:37.223 lat (msec) : 20=0.70%, 50=98.57%, 100=0.74% 00:31:37.223 cpu : usr=98.80%, sys=0.74%, ctx=152, majf=0, minf=50 00:31:37.223 IO depths : 1=3.6%, 2=9.0%, 4=23.7%, 8=54.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:37.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 issued rwts: total=4892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.223 filename1: (groupid=0, jobs=1): err= 0: pid=2189825: Mon Jul 15 21:22:02 2024 00:31:37.223 read: IOPS=526, BW=2106KiB/s (2157kB/s)(20.6MiB/10019msec) 00:31:37.223 slat (usec): min=5, max=111, avg=20.53, stdev=16.77 00:31:37.223 clat (usec): min=8079, max=54675, avg=30220.56, stdev=5948.59 00:31:37.223 lat (usec): min=8095, max=54714, avg=30241.09, stdev=5952.26 00:31:37.223 clat percentiles (usec): 00:31:37.223 | 1.00th=[11469], 5.00th=[20579], 10.00th=[21627], 20.00th=[24249], 00:31:37.223 | 30.00th=[31065], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:37.223 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[36963], 00:31:37.223 | 99.00th=[50594], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:31:37.223 | 99.99th=[54789] 00:31:37.223 bw ( KiB/s): min= 1872, max= 2528, per=4.45%, avg=2104.00, stdev=192.32, samples=20 00:31:37.223 iops : min= 468, max= 632, avg=526.00, stdev=48.08, samples=20 00:31:37.223 lat (msec) : 10=0.61%, 20=3.18%, 50=95.03%, 100=1.18% 00:31:37.223 cpu : usr=97.82%, sys=1.27%, ctx=33, majf=0, minf=67 00:31:37.223 IO depths : 1=3.2%, 2=6.5%, 4=16.5%, 8=64.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:31:37.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 complete : 0=0.0%, 4=91.8%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 issued rwts: total=5276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.223 filename2: 
(groupid=0, jobs=1): err= 0: pid=2189826: Mon Jul 15 21:22:02 2024 00:31:37.223 read: IOPS=492, BW=1968KiB/s (2015kB/s)(19.2MiB/10008msec) 00:31:37.223 slat (nsec): min=5571, max=94665, avg=17160.85, stdev=13622.74 00:31:37.223 clat (usec): min=10983, max=62760, avg=32381.95, stdev=5474.70 00:31:37.223 lat (usec): min=11006, max=62783, avg=32399.11, stdev=5475.94 00:31:37.223 clat percentiles (usec): 00:31:37.223 | 1.00th=[18744], 5.00th=[22414], 10.00th=[25297], 20.00th=[31327], 00:31:37.223 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:37.223 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35914], 95.00th=[42206], 00:31:37.223 | 99.00th=[53216], 99.50th=[54264], 99.90th=[62653], 99.95th=[62653], 00:31:37.223 | 99.99th=[62653] 00:31:37.223 bw ( KiB/s): min= 1747, max= 2096, per=4.16%, avg=1964.79, stdev=77.66, samples=19 00:31:37.223 iops : min= 436, max= 524, avg=491.16, stdev=19.53, samples=19 00:31:37.223 lat (msec) : 20=1.95%, 50=96.55%, 100=1.50% 00:31:37.223 cpu : usr=99.00%, sys=0.70%, ctx=12, majf=0, minf=82 00:31:37.223 IO depths : 1=2.6%, 2=5.9%, 4=15.7%, 8=65.4%, 16=10.4%, 32=0.0%, >=64=0.0% 00:31:37.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.223 filename2: (groupid=0, jobs=1): err= 0: pid=2189827: Mon Jul 15 21:22:02 2024 00:31:37.223 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10021msec) 00:31:37.223 slat (nsec): min=5579, max=99884, avg=19112.79, stdev=15782.70 00:31:37.223 clat (usec): min=11292, max=58256, avg=31780.98, stdev=4998.60 00:31:37.223 lat (usec): min=11300, max=58267, avg=31800.09, stdev=5000.46 00:31:37.223 clat percentiles (usec): 00:31:37.223 | 1.00th=[19530], 5.00th=[21890], 10.00th=[23987], 20.00th=[31327], 00:31:37.223 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:37.223 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[39584], 00:31:37.223 | 99.00th=[50070], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:31:37.223 | 99.99th=[58459] 00:31:37.223 bw ( KiB/s): min= 1920, max= 2672, per=4.24%, avg=2005.05, stdev=176.69, samples=19 00:31:37.223 iops : min= 480, max= 668, avg=501.26, stdev=44.17, samples=19 00:31:37.223 lat (msec) : 20=2.01%, 50=96.87%, 100=1.12% 00:31:37.223 cpu : usr=99.29%, sys=0.41%, ctx=15, majf=0, minf=71 00:31:37.223 IO depths : 1=3.3%, 2=7.3%, 4=18.4%, 8=61.6%, 16=9.4%, 32=0.0%, >=64=0.0% 00:31:37.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 issued rwts: total=5018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.223 filename2: (groupid=0, jobs=1): err= 0: pid=2189828: Mon Jul 15 21:22:02 2024 00:31:37.223 read: IOPS=486, BW=1945KiB/s (1992kB/s)(19.0MiB/10003msec) 00:31:37.223 slat (nsec): min=5581, max=74653, avg=15821.86, stdev=11512.53 00:31:37.223 clat (usec): min=13838, max=57184, avg=32791.53, stdev=3469.71 00:31:37.223 lat (usec): min=13844, max=57202, avg=32807.35, stdev=3470.03 00:31:37.223 clat percentiles (usec): 00:31:37.223 | 1.00th=[20317], 5.00th=[31065], 10.00th=[31589], 20.00th=[32113], 00:31:37.223 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 
00:31:37.223 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[35390], 00:31:37.223 | 99.00th=[49021], 99.50th=[51643], 99.90th=[57410], 99.95th=[57410], 00:31:37.223 | 99.99th=[57410] 00:31:37.223 bw ( KiB/s): min= 1776, max= 2048, per=4.10%, avg=1939.58, stdev=62.10, samples=19 00:31:37.223 iops : min= 444, max= 512, avg=484.89, stdev=15.52, samples=19 00:31:37.223 lat (msec) : 20=0.86%, 50=98.40%, 100=0.74% 00:31:37.223 cpu : usr=99.04%, sys=0.64%, ctx=43, majf=0, minf=103 00:31:37.223 IO depths : 1=1.7%, 2=4.2%, 4=14.5%, 8=66.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:31:37.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 complete : 0=0.0%, 4=92.3%, 8=4.1%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.223 filename2: (groupid=0, jobs=1): err= 0: pid=2189829: Mon Jul 15 21:22:02 2024 00:31:37.223 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10016msec) 00:31:37.223 slat (usec): min=5, max=106, avg=19.96, stdev=16.95 00:31:37.223 clat (usec): min=13885, max=54639, avg=31959.09, stdev=5326.50 00:31:37.223 lat (usec): min=13891, max=54697, avg=31979.06, stdev=5328.08 00:31:37.223 clat percentiles (usec): 00:31:37.223 | 1.00th=[18482], 5.00th=[21627], 10.00th=[24511], 20.00th=[31327], 00:31:37.223 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:37.223 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[41681], 00:31:37.223 | 99.00th=[51119], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:31:37.223 | 99.99th=[54789] 00:31:37.223 bw ( KiB/s): min= 1888, max= 2128, per=4.21%, avg=1988.80, stdev=74.66, samples=20 00:31:37.223 iops : min= 472, max= 532, avg=497.20, stdev=18.66, samples=20 00:31:37.223 lat (msec) : 20=2.57%, 50=95.85%, 100=1.58% 00:31:37.223 cpu : usr=99.14%, sys=0.57%, ctx=10, majf=0, minf=55 00:31:37.223 IO depths : 1=3.9%, 2=8.0%, 4=19.2%, 8=60.1%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:37.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.223 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.223 filename2: (groupid=0, jobs=1): err= 0: pid=2189830: Mon Jul 15 21:22:02 2024 00:31:37.223 read: IOPS=490, BW=1961KiB/s (2009kB/s)(19.2MiB/10017msec) 00:31:37.223 slat (usec): min=5, max=112, avg=22.47, stdev=17.09 00:31:37.223 clat (usec): min=18159, max=43884, avg=32431.20, stdev=1441.99 00:31:37.223 lat (usec): min=18168, max=43900, avg=32453.66, stdev=1441.65 00:31:37.223 clat percentiles (usec): 00:31:37.223 | 1.00th=[23725], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:37.223 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.223 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:37.223 | 99.00th=[34866], 99.50th=[35914], 99.90th=[37487], 99.95th=[39060], 00:31:37.223 | 99.99th=[43779] 00:31:37.223 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1960.32, stdev=60.63, samples=19 00:31:37.223 iops : min= 480, max= 512, avg=490.00, stdev=15.13, samples=19 00:31:37.223 lat (msec) : 20=0.33%, 50=99.67% 00:31:37.224 cpu : usr=97.97%, sys=1.10%, ctx=180, majf=0, minf=66 00:31:37.224 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:37.224 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.224 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.224 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.224 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.224 filename2: (groupid=0, jobs=1): err= 0: pid=2189831: Mon Jul 15 21:22:02 2024 00:31:37.224 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.7MiB/10004msec) 00:31:37.224 slat (nsec): min=5572, max=96200, avg=15227.26, stdev=12506.46 00:31:37.224 clat (usec): min=4074, max=59415, avg=33310.48, stdev=5528.41 00:31:37.224 lat (usec): min=4080, max=59422, avg=33325.71, stdev=5528.88 00:31:37.224 clat percentiles (usec): 00:31:37.224 | 1.00th=[18744], 5.00th=[24249], 10.00th=[28443], 20.00th=[31851], 00:31:37.224 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:31:37.224 | 70.00th=[33424], 80.00th=[34341], 90.00th=[40109], 95.00th=[44303], 00:31:37.224 | 99.00th=[51119], 99.50th=[55837], 99.90th=[59507], 99.95th=[59507], 00:31:37.224 | 99.99th=[59507] 00:31:37.224 bw ( KiB/s): min= 1760, max= 2100, per=4.02%, avg=1901.84, stdev=81.08, samples=19 00:31:37.224 iops : min= 440, max= 525, avg=475.42, stdev=20.25, samples=19 00:31:37.224 lat (msec) : 10=0.33%, 20=1.11%, 50=97.39%, 100=1.17% 00:31:37.224 cpu : usr=98.90%, sys=0.71%, ctx=53, majf=0, minf=54 00:31:37.224 IO depths : 1=0.1%, 2=0.5%, 4=6.7%, 8=78.5%, 16=14.1%, 32=0.0%, >=64=0.0% 00:31:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.224 complete : 0=0.0%, 4=89.7%, 8=6.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.224 issued rwts: total=4794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.224 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.224 filename2: (groupid=0, jobs=1): err= 0: pid=2189832: Mon Jul 15 21:22:02 2024 00:31:37.224 read: IOPS=475, BW=1902KiB/s (1948kB/s)(18.6MiB/10003msec) 00:31:37.224 slat (nsec): min=5561, max=80708, avg=14094.39, stdev=11059.38 00:31:37.224 clat (usec): min=11193, max=89559, avg=33565.82, stdev=5998.10 00:31:37.224 lat (usec): min=11198, max=89576, avg=33579.92, stdev=5998.03 00:31:37.224 clat percentiles (usec): 00:31:37.224 | 1.00th=[19268], 5.00th=[25035], 10.00th=[27919], 20.00th=[31851], 00:31:37.224 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:31:37.224 | 70.00th=[33424], 80.00th=[34866], 90.00th=[40633], 95.00th=[45351], 00:31:37.224 | 99.00th=[52691], 99.50th=[55313], 99.90th=[89654], 99.95th=[89654], 00:31:37.224 | 99.99th=[89654] 00:31:37.224 bw ( KiB/s): min= 1624, max= 2000, per=4.01%, avg=1893.89, stdev=88.28, samples=19 00:31:37.224 iops : min= 406, max= 500, avg=473.47, stdev=22.07, samples=19 00:31:37.224 lat (msec) : 20=1.35%, 50=96.55%, 100=2.10% 00:31:37.224 cpu : usr=99.03%, sys=0.65%, ctx=52, majf=0, minf=78 00:31:37.224 IO depths : 1=0.7%, 2=1.4%, 4=9.0%, 8=75.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:31:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.224 complete : 0=0.0%, 4=90.3%, 8=5.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.224 issued rwts: total=4757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.224 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.224 filename2: (groupid=0, jobs=1): err= 0: pid=2189833: Mon Jul 15 21:22:02 2024 00:31:37.224 read: IOPS=503, BW=2014KiB/s (2062kB/s)(19.7MiB/10032msec) 00:31:37.224 slat (usec): min=5, max=118, avg=14.51, stdev=13.82 00:31:37.224 clat (usec): min=7546, max=52979, 
avg=31671.50, stdev=4314.81 00:31:37.224 lat (usec): min=7557, max=52985, avg=31686.01, stdev=4315.47 00:31:37.224 clat percentiles (usec): 00:31:37.224 | 1.00th=[ 8848], 5.00th=[23200], 10.00th=[30278], 20.00th=[31851], 00:31:37.224 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:37.224 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:37.224 | 99.00th=[40633], 99.50th=[41681], 99.90th=[52691], 99.95th=[53216], 00:31:37.224 | 99.99th=[53216] 00:31:37.224 bw ( KiB/s): min= 1916, max= 2320, per=4.26%, avg=2013.40, stdev=137.03, samples=20 00:31:37.224 iops : min= 479, max= 580, avg=503.35, stdev=34.26, samples=20 00:31:37.224 lat (msec) : 10=1.27%, 20=2.18%, 50=96.44%, 100=0.12% 00:31:37.224 cpu : usr=99.11%, sys=0.59%, ctx=18, majf=0, minf=93 00:31:37.224 IO depths : 1=5.1%, 2=10.3%, 4=22.0%, 8=55.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:31:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.224 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.224 issued rwts: total=5050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.224 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.224 00:31:37.224 Run status group 0 (all jobs): 00:31:37.224 READ: bw=46.2MiB/s (48.4MB/s), 1902KiB/s-2106KiB/s (1948kB/s-2157kB/s), io=463MiB (485MB), run=10003-10032msec 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:37.224 21:22:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 bdev_null0 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 [2024-07-15 21:22:02.935302] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 bdev_null1 00:31:37.224 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.225 21:22:02 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:37.225 { 00:31:37.225 "params": { 00:31:37.225 "name": "Nvme$subsystem", 00:31:37.225 "trtype": "$TEST_TRANSPORT", 00:31:37.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.225 "adrfam": "ipv4", 00:31:37.225 "trsvcid": "$NVMF_PORT", 00:31:37.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.225 "hdgst": ${hdgst:-false}, 00:31:37.225 "ddgst": ${ddgst:-false} 00:31:37.225 }, 00:31:37.225 "method": "bdev_nvme_attach_controller" 00:31:37.225 } 00:31:37.225 EOF 00:31:37.225 )") 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:37.225 { 00:31:37.225 "params": { 00:31:37.225 "name": "Nvme$subsystem", 00:31:37.225 "trtype": "$TEST_TRANSPORT", 00:31:37.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.225 "adrfam": "ipv4", 00:31:37.225 "trsvcid": "$NVMF_PORT", 00:31:37.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.225 "hdgst": ${hdgst:-false}, 00:31:37.225 "ddgst": ${ddgst:-false} 
00:31:37.225 }, 00:31:37.225 "method": "bdev_nvme_attach_controller" 00:31:37.225 } 00:31:37.225 EOF 00:31:37.225 )") 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:37.225 21:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:37.225 "params": { 00:31:37.225 "name": "Nvme0", 00:31:37.225 "trtype": "tcp", 00:31:37.225 "traddr": "10.0.0.2", 00:31:37.225 "adrfam": "ipv4", 00:31:37.225 "trsvcid": "4420", 00:31:37.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:37.225 "hdgst": false, 00:31:37.225 "ddgst": false 00:31:37.225 }, 00:31:37.225 "method": "bdev_nvme_attach_controller" 00:31:37.225 },{ 00:31:37.225 "params": { 00:31:37.225 "name": "Nvme1", 00:31:37.225 "trtype": "tcp", 00:31:37.225 "traddr": "10.0.0.2", 00:31:37.225 "adrfam": "ipv4", 00:31:37.225 "trsvcid": "4420", 00:31:37.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:37.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:37.225 "hdgst": false, 00:31:37.225 "ddgst": false 00:31:37.225 }, 00:31:37.225 "method": "bdev_nvme_attach_controller" 00:31:37.225 }' 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:37.225 21:22:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.225 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:37.225 ... 00:31:37.225 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:37.225 ... 
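On the host side the harness preloads the spdk_bdev fio engine and hands fio two anonymous descriptors: the SPDK JSON configuration on /dev/fd/62, whose bdev_nvme_attach_controller entries are printed above, and the generated job file on /dev/fd/61, which is not echoed in the log. A rough standalone equivalent using ordinary files is sketched below; the outer "subsystems"/"config" wrapper, the Nvme0n1 filename and the job-file layout are assumptions based on SPDK's usual JSON config format and bdev naming, not text visible in this excerpt.

# bdev.json -- one controller shown; the run above attaches Nvme0 and Nvme1 the same way
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# job.fio -- mirrors the parameters set for this pass (bs=8k,16k,128k, numjobs=2,
# iodepth=8, runtime=5, rw=randread); filename=Nvme0n1 is the assumed bdev name
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
; the spdk_bdev engine requires fio thread mode
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio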
00:31:37.225 fio-3.35 00:31:37.225 Starting 4 threads 00:31:37.225 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.508 00:31:42.508 filename0: (groupid=0, jobs=1): err= 0: pid=2192019: Mon Jul 15 21:22:09 2024 00:31:42.508 read: IOPS=2058, BW=16.1MiB/s (16.9MB/s)(80.5MiB/5003msec) 00:31:42.508 slat (nsec): min=5399, max=55092, avg=7753.05, stdev=3123.95 00:31:42.508 clat (usec): min=2122, max=44117, avg=3864.23, stdev=1282.60 00:31:42.508 lat (usec): min=2130, max=44152, avg=3871.98, stdev=1282.76 00:31:42.508 clat percentiles (usec): 00:31:42.508 | 1.00th=[ 2835], 5.00th=[ 3130], 10.00th=[ 3294], 20.00th=[ 3458], 00:31:42.508 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3752], 00:31:42.508 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4883], 95.00th=[ 5407], 00:31:42.508 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[44303], 00:31:42.508 | 99.99th=[44303] 00:31:42.508 bw ( KiB/s): min=14928, max=17168, per=24.58%, avg=16476.44, stdev=642.98, samples=9 00:31:42.508 iops : min= 1866, max= 2146, avg=2059.56, stdev=80.37, samples=9 00:31:42.508 lat (msec) : 4=81.84%, 10=18.08%, 50=0.08% 00:31:42.508 cpu : usr=96.86%, sys=2.88%, ctx=12, majf=0, minf=83 00:31:42.508 IO depths : 1=0.1%, 2=0.3%, 4=72.6%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.508 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.508 issued rwts: total=10299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.508 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:42.508 filename0: (groupid=0, jobs=1): err= 0: pid=2192020: Mon Jul 15 21:22:09 2024 00:31:42.508 read: IOPS=2198, BW=17.2MiB/s (18.0MB/s)(85.9MiB/5002msec) 00:31:42.508 slat (nsec): min=5392, max=78288, avg=6166.08, stdev=1986.45 00:31:42.508 clat (usec): min=1128, max=6241, avg=3620.94, stdev=542.16 00:31:42.508 lat (usec): min=1135, max=6247, avg=3627.10, stdev=541.89 00:31:42.508 clat percentiles (usec): 00:31:42.508 | 1.00th=[ 1811], 5.00th=[ 2704], 10.00th=[ 2999], 20.00th=[ 3326], 00:31:42.508 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3785], 00:31:42.508 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4178], 95.00th=[ 4490], 00:31:42.508 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 5866], 99.95th=[ 6128], 00:31:42.509 | 99.99th=[ 6259] 00:31:42.509 bw ( KiB/s): min=16624, max=19158, per=26.22%, avg=17579.33, stdev=713.48, samples=9 00:31:42.509 iops : min= 2078, max= 2394, avg=2197.33, stdev=88.98, samples=9 00:31:42.509 lat (msec) : 2=1.24%, 4=84.42%, 10=14.34% 00:31:42.509 cpu : usr=97.46%, sys=2.24%, ctx=15, majf=0, minf=91 00:31:42.509 IO depths : 1=0.3%, 2=2.9%, 4=68.3%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.509 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.509 issued rwts: total=10998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.509 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:42.509 filename1: (groupid=0, jobs=1): err= 0: pid=2192021: Mon Jul 15 21:22:09 2024 00:31:42.509 read: IOPS=2061, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5003msec) 00:31:42.509 slat (nsec): min=7857, max=56703, avg=9200.25, stdev=3074.45 00:31:42.509 clat (usec): min=2111, max=45261, avg=3855.37, stdev=1267.99 00:31:42.509 lat (usec): min=2118, max=45296, avg=3864.57, stdev=1268.10 00:31:42.509 clat percentiles (usec): 00:31:42.509 | 1.00th=[ 2835], 5.00th=[ 
3228], 10.00th=[ 3359], 20.00th=[ 3523], 00:31:42.509 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3785], 00:31:42.509 | 70.00th=[ 3818], 80.00th=[ 4047], 90.00th=[ 4490], 95.00th=[ 5145], 00:31:42.509 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6390], 99.95th=[45351], 00:31:42.509 | 99.99th=[45351] 00:31:42.509 bw ( KiB/s): min=15406, max=17008, per=24.61%, avg=16495.80, stdev=491.39, samples=10 00:31:42.509 iops : min= 1925, max= 2126, avg=2061.90, stdev=61.61, samples=10 00:31:42.509 lat (msec) : 4=79.12%, 10=20.80%, 50=0.08% 00:31:42.509 cpu : usr=96.64%, sys=3.06%, ctx=11, majf=0, minf=76 00:31:42.509 IO depths : 1=0.1%, 2=0.7%, 4=71.7%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.509 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.509 issued rwts: total=10313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.509 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:42.509 filename1: (groupid=0, jobs=1): err= 0: pid=2192022: Mon Jul 15 21:22:09 2024 00:31:42.509 read: IOPS=2061, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5002msec) 00:31:42.509 slat (nsec): min=5392, max=45251, avg=6830.50, stdev=2254.01 00:31:42.509 clat (usec): min=1785, max=6588, avg=3861.82, stdev=654.44 00:31:42.509 lat (usec): min=1793, max=6598, avg=3868.65, stdev=654.31 00:31:42.509 clat percentiles (usec): 00:31:42.509 | 1.00th=[ 2606], 5.00th=[ 3130], 10.00th=[ 3359], 20.00th=[ 3458], 00:31:42.509 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:31:42.509 | 70.00th=[ 3785], 80.00th=[ 4080], 90.00th=[ 5014], 95.00th=[ 5407], 00:31:42.509 | 99.00th=[ 5997], 99.50th=[ 5997], 99.90th=[ 6259], 99.95th=[ 6325], 00:31:42.509 | 99.99th=[ 6587] 00:31:42.509 bw ( KiB/s): min=15920, max=17424, per=24.58%, avg=16480.00, stdev=480.93, samples=9 00:31:42.509 iops : min= 1990, max= 2178, avg=2060.00, stdev=60.12, samples=9 00:31:42.509 lat (msec) : 2=0.16%, 4=78.16%, 10=21.68% 00:31:42.509 cpu : usr=97.68%, sys=2.02%, ctx=45, majf=0, minf=98 00:31:42.509 IO depths : 1=0.1%, 2=0.4%, 4=72.4%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.509 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.509 issued rwts: total=10311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.509 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:42.509 00:31:42.509 Run status group 0 (all jobs): 00:31:42.509 READ: bw=65.5MiB/s (68.6MB/s), 16.1MiB/s-17.2MiB/s (16.9MB/s-18.0MB/s), io=328MiB (343MB), run=5002-5003msec 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.509 00:31:42.509 real 0m24.323s 00:31:42.509 user 5m19.786s 00:31:42.509 sys 0m3.691s 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 ************************************ 00:31:42.509 END TEST fio_dif_rand_params 00:31:42.509 ************************************ 00:31:42.509 21:22:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:42.509 21:22:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:42.509 21:22:09 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:42.509 21:22:09 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 ************************************ 00:31:42.509 START TEST fio_dif_digest 00:31:42.509 ************************************ 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 bdev_null0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.509 [2024-07-15 21:22:09.397750] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:42.509 { 00:31:42.509 "params": { 00:31:42.509 "name": "Nvme$subsystem", 00:31:42.509 "trtype": "$TEST_TRANSPORT", 00:31:42.509 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:42.509 "adrfam": "ipv4", 00:31:42.509 "trsvcid": "$NVMF_PORT", 00:31:42.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.509 "hdgst": ${hdgst:-false}, 00:31:42.509 "ddgst": ${ddgst:-false} 00:31:42.509 }, 00:31:42.509 "method": "bdev_nvme_attach_controller" 00:31:42.509 } 00:31:42.509 EOF 00:31:42.509 )") 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.509 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:42.510 "params": { 00:31:42.510 "name": "Nvme0", 00:31:42.510 "trtype": "tcp", 00:31:42.510 "traddr": "10.0.0.2", 00:31:42.510 "adrfam": "ipv4", 00:31:42.510 "trsvcid": "4420", 00:31:42.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.510 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.510 "hdgst": true, 00:31:42.510 "ddgst": true 00:31:42.510 }, 00:31:42.510 "method": "bdev_nvme_attach_controller" 00:31:42.510 }' 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:42.510 21:22:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.812 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:42.812 ... 
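Editor's note on the block above: relative to the random-params run, the digest case only flips the hdgst/ddgst switches before the JSON is generated, which is why the printf output here carries "hdgst": true and "ddgst": true. A minimal illustration of the ${hdgst:-false}/${ddgst:-false} parameter-expansion defaults doing that work; emit_digest_params is a hypothetical helper for this note, not part of the test scripts.

    # Illustrative only: shows how the ${var:-false} expansions in the generated
    # config react to the digest switches set by the fio_dif_digest case.
    emit_digest_params() {
        printf '"hdgst": %s, "ddgst": %s\n' "${hdgst:-false}" "${ddgst:-false}"
    }

    emit_digest_params                          # unset  -> "hdgst": false, "ddgst": false
    hdgst=true ddgst=true emit_digest_params    # digest -> "hdgst": true, "ddgst": true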
00:31:42.812 fio-3.35 00:31:42.812 Starting 3 threads 00:31:42.812 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.062 00:31:55.062 filename0: (groupid=0, jobs=1): err= 0: pid=2193537: Mon Jul 15 21:22:20 2024 00:31:55.062 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(270MiB/10047msec) 00:31:55.062 slat (nsec): min=5645, max=37589, avg=7870.99, stdev=2066.13 00:31:55.062 clat (usec): min=7062, max=56917, avg=13930.52, stdev=3553.86 00:31:55.062 lat (usec): min=7068, max=56924, avg=13938.39, stdev=3553.94 00:31:55.062 clat percentiles (usec): 00:31:55.062 | 1.00th=[ 9503], 5.00th=[10814], 10.00th=[11731], 20.00th=[12649], 00:31:55.062 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:31:55.062 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15926], 00:31:55.062 | 99.00th=[16909], 99.50th=[52691], 99.90th=[55313], 99.95th=[56361], 00:31:55.062 | 99.99th=[56886] 00:31:55.062 bw ( KiB/s): min=23040, max=29440, per=34.00%, avg=27609.60, stdev=1623.93, samples=20 00:31:55.062 iops : min= 180, max= 230, avg=215.70, stdev=12.69, samples=20 00:31:55.062 lat (msec) : 10=1.76%, 20=97.59%, 50=0.05%, 100=0.60% 00:31:55.062 cpu : usr=95.77%, sys=3.96%, ctx=20, majf=0, minf=64 00:31:55.062 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.062 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.062 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:55.062 filename0: (groupid=0, jobs=1): err= 0: pid=2193538: Mon Jul 15 21:22:20 2024 00:31:55.062 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(277MiB/10046msec) 00:31:55.062 slat (nsec): min=5654, max=50406, avg=8049.01, stdev=2086.23 00:31:55.062 clat (usec): min=7690, max=56116, avg=13558.15, stdev=3515.33 00:31:55.062 lat (usec): min=7696, max=56125, avg=13566.20, stdev=3515.48 00:31:55.062 clat percentiles (usec): 00:31:55.062 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[11338], 20.00th=[12387], 00:31:55.062 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:31:55.062 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15664], 00:31:55.062 | 99.00th=[16909], 99.50th=[51119], 99.90th=[55837], 99.95th=[55837], 00:31:55.062 | 99.99th=[56361] 00:31:55.062 bw ( KiB/s): min=26368, max=30976, per=34.93%, avg=28364.80, stdev=1463.79, samples=20 00:31:55.062 iops : min= 206, max= 242, avg=221.60, stdev=11.44, samples=20 00:31:55.062 lat (msec) : 10=3.97%, 20=95.40%, 50=0.05%, 100=0.59% 00:31:55.062 cpu : usr=95.12%, sys=4.59%, ctx=23, majf=0, minf=226 00:31:55.062 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.062 issued rwts: total=2218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.062 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:55.062 filename0: (groupid=0, jobs=1): err= 0: pid=2193539: Mon Jul 15 21:22:20 2024 00:31:55.062 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(250MiB/10008msec) 00:31:55.062 slat (nsec): min=5879, max=61023, avg=9638.22, stdev=2003.90 00:31:55.062 clat (usec): min=6375, max=59766, avg=15027.52, stdev=5732.71 00:31:55.062 lat (usec): min=6383, max=59776, avg=15037.16, stdev=5732.73 00:31:55.062 clat percentiles (usec): 
00:31:55.062 | 1.00th=[ 9241], 5.00th=[11731], 10.00th=[12649], 20.00th=[13435], 00:31:55.062 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14746], 00:31:55.062 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16712], 00:31:55.062 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57934], 99.95th=[59507], 00:31:55.062 | 99.99th=[59507] 00:31:55.062 bw ( KiB/s): min=20992, max=30208, per=31.44%, avg=25525.80, stdev=2178.24, samples=20 00:31:55.062 iops : min= 164, max= 236, avg=199.40, stdev=17.01, samples=20 00:31:55.062 lat (msec) : 10=2.00%, 20=96.19%, 100=1.80% 00:31:55.062 cpu : usr=96.42%, sys=3.29%, ctx=22, majf=0, minf=142 00:31:55.062 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.062 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.062 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:55.062 00:31:55.062 Run status group 0 (all jobs): 00:31:55.062 READ: bw=79.3MiB/s (83.1MB/s), 24.9MiB/s-27.6MiB/s (26.1MB/s-28.9MB/s), io=797MiB (835MB), run=10008-10047msec 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.062 00:31:55.062 real 0m11.144s 00:31:55.062 user 0m41.888s 00:31:55.062 sys 0m1.528s 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:55.062 21:22:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:55.063 ************************************ 00:31:55.063 END TEST fio_dif_digest 00:31:55.063 ************************************ 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:55.063 21:22:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:55.063 21:22:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:31:55.063 rmmod nvme_tcp 00:31:55.063 rmmod nvme_fabrics 00:31:55.063 rmmod nvme_keyring 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2183071 ']' 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2183071 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2183071 ']' 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2183071 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2183071 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2183071' 00:31:55.063 killing process with pid 2183071 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2183071 00:31:55.063 21:22:20 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2183071 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:55.063 21:22:20 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:57.604 Waiting for block devices as requested 00:31:57.604 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:57.604 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:57.604 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:57.604 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:57.604 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:57.604 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:57.604 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:57.865 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:57.865 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:58.125 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:58.125 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:58.125 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:58.125 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:58.385 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:58.385 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:58.385 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:58.385 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:58.647 21:22:25 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:58.647 21:22:25 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:58.647 21:22:25 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:58.647 21:22:25 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:58.647 21:22:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.647 21:22:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:58.647 21:22:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.560 21:22:27 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:00.560 00:32:00.560 real 1m18.471s 00:32:00.560 user 8m6.209s 00:32:00.560 sys 0m20.252s 00:32:00.560 21:22:27 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:32:00.560 21:22:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:00.560 ************************************ 00:32:00.560 END TEST nvmf_dif 00:32:00.560 ************************************ 00:32:00.560 21:22:27 -- common/autotest_common.sh@1142 -- # return 0 00:32:00.560 21:22:27 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:00.560 21:22:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:00.560 21:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.560 21:22:27 -- common/autotest_common.sh@10 -- # set +x 00:32:00.560 ************************************ 00:32:00.560 START TEST nvmf_abort_qd_sizes 00:32:00.560 ************************************ 00:32:00.560 21:22:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:00.822 * Looking for test storage... 00:32:00.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.822 21:22:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:00.822 21:22:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:08.959 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:08.960 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:08.960 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:08.960 Found net devices under 0000:31:00.0: cvl_0_0 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:08.960 Found net devices under 0000:31:00.1: cvl_0_1 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
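Editor's note on the block above: the discovery loop checks the cached Intel E810 device IDs (0x1592/0x159b), then resolves each matching PCI function to its kernel netdev through sysfs, which is where cvl_0_0 and cvl_0_1 come from. A rough standalone equivalent follows, assuming lspci is available; the ID list and the parsing are illustrative, not a copy of nvmf/common.sh.

    # Sketch under stated assumptions: list E810 functions and their bound netdevs.
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}') \
               $(lspci -Dnd 8086:1592 | awk '{print $1}'); do
        # Each PCI function exposes its netdev (if any) under .../net/<ifname>.
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] || continue   # no driver bound / no netdev attached
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done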
00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:08.960 21:22:35 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.960 21:22:36 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.960 21:22:36 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.960 21:22:36 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:08.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:32:08.960 00:32:08.960 --- 10.0.0.2 ping statistics --- 00:32:08.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.960 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:32:08.960 21:22:36 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:32:08.960 00:32:08.960 --- 10.0.0.1 ping statistics --- 00:32:08.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.960 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:32:08.960 21:22:36 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.960 21:22:36 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:08.960 21:22:36 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:08.960 21:22:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:13.170 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:13.170 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2203558 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2203558 00:32:13.170 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:13.171 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2203558 ']' 00:32:13.171 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.171 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:13.171 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:13.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.171 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:13.171 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:13.171 [2024-07-15 21:22:40.216161] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:32:13.171 [2024-07-15 21:22:40.216219] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.171 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.171 [2024-07-15 21:22:40.295477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:13.171 [2024-07-15 21:22:40.370841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.171 [2024-07-15 21:22:40.370881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.171 [2024-07-15 21:22:40.370889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.171 [2024-07-15 21:22:40.370895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.171 [2024-07-15 21:22:40.370901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.171 [2024-07-15 21:22:40.371040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.171 [2024-07-15 21:22:40.371155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.171 [2024-07-15 21:22:40.371346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.171 [2024-07-15 21:22:40.371470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.764 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:13.764 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:13.764 21:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:13.764 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:13.764 21:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:13.764 21:22:41 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:13.764 21:22:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:14.024 ************************************ 00:32:14.024 START TEST spdk_target_abort 00:32:14.024 ************************************ 00:32:14.024 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:14.024 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:14.024 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:14.024 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.024 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:14.285 spdk_targetn1 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:14.285 [2024-07-15 21:22:41.405237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:14.285 [2024-07-15 21:22:41.445470] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:14.285 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:14.286 21:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:14.286 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:14.546 [2024-07-15 21:22:41.689663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3000 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:14.546 [2024-07-15 21:22:41.689690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.842 Initializing NVMe Controllers 00:32:17.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:17.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:17.843 Initialization complete. Launching workers. 00:32:17.843 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11223, failed: 1 00:32:17.843 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2964, failed to submit 8260 00:32:17.843 success 724, unsuccess 2240, failed 0 00:32:17.843 21:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.843 21:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:17.843 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.843 [2024-07-15 21:22:44.934303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:2384 len:8 PRP1 0x200007c52000 PRP2 0x0 00:32:17.843 [2024-07-15 21:22:44.934344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:17.843 [2024-07-15 21:22:44.942372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:2528 len:8 PRP1 0x200007c50000 PRP2 0x0 00:32:17.843 [2024-07-15 21:22:44.942393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:19.754 [2024-07-15 21:22:47.022991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:50640 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:32:19.754 [2024-07-15 21:22:47.023027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00bc p:1 m:0 dnr:0 00:32:20.325 [2024-07-15 21:22:47.459461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:60680 len:8 PRP1 0x200007c58000 PRP2 0x0 00:32:20.326 [2024-07-15 21:22:47.459494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00a2 p:0 m:0 dnr:0 00:32:20.897 Initializing NVMe Controllers 00:32:20.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:20.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:20.897 Initialization complete. Launching workers. 
00:32:20.897 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8675, failed: 4 00:32:20.897 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1206, failed to submit 7473 00:32:20.897 success 378, unsuccess 828, failed 0 00:32:20.897 21:22:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:20.897 21:22:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:20.897 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.198 Initializing NVMe Controllers 00:32:24.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:24.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:24.198 Initialization complete. Launching workers. 00:32:24.198 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41469, failed: 0 00:32:24.198 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2684, failed to submit 38785 00:32:24.198 success 616, unsuccess 2068, failed 0 00:32:24.198 21:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:24.198 21:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.198 21:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:24.198 21:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.198 21:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:24.198 21:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.198 21:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2203558 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2203558 ']' 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2203558 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2203558 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2203558' 00:32:26.112 killing process with pid 2203558 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2203558 00:32:26.112 21:22:53 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2203558 00:32:26.112 00:32:26.112 real 0m12.153s 00:32:26.112 user 0m49.473s 00:32:26.112 sys 0m1.763s 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.112 ************************************ 00:32:26.112 END TEST spdk_target_abort 00:32:26.112 ************************************ 00:32:26.112 21:22:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:26.112 21:22:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:26.112 21:22:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:26.112 21:22:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:26.112 21:22:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:26.112 ************************************ 00:32:26.112 START TEST kernel_target_abort 00:32:26.112 ************************************ 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 
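For reference, the spdk_target_abort run that just finished drives SPDK's abort example over queue depths 4, 24 and 64 against the TCP listener it created, and the kernel_target_abort test starting above repeats the same loop against an in-kernel target. A condensed sketch of that loop, with the workspace path shortened to SPDK_DIR and every flag copied from the trace:

  # rabort loop as seen in the trace; SPDK_DIR is an assumed shorthand for the
  # full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path printed in the log.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

  # Mixed read/write workload (-w rw -M 50) with 4 KiB I/O (-o 4096); the per-run
  # "I/O completed / abort submitted / success, unsuccess" summaries in the log
  # are printed by this example binary.
  for qd in 4 24 64; do
      "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
  done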
00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:26.112 21:22:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:30.316 Waiting for block devices as requested 00:32:30.316 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:30.316 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:30.316 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:30.316 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:30.316 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:30.316 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:30.316 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:30.316 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:30.575 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:30.575 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:30.575 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:30.836 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:30.836 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:30.836 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:30.836 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:31.096 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:31.096 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:31.096 No valid GPT data, bailing 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:32:31.096 00:32:31.096 Discovery Log Number of Records 2, Generation counter 2 00:32:31.096 =====Discovery Log Entry 0====== 00:32:31.096 trtype: tcp 00:32:31.096 adrfam: ipv4 00:32:31.096 subtype: current discovery subsystem 00:32:31.096 treq: not specified, sq flow control disable supported 00:32:31.096 portid: 1 00:32:31.096 trsvcid: 4420 00:32:31.096 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:31.096 traddr: 10.0.0.1 00:32:31.096 eflags: none 00:32:31.096 sectype: none 00:32:31.096 =====Discovery Log Entry 1====== 00:32:31.096 trtype: tcp 00:32:31.096 adrfam: ipv4 00:32:31.096 subtype: nvme subsystem 00:32:31.096 treq: not specified, sq flow control disable supported 00:32:31.096 portid: 1 00:32:31.096 trsvcid: 4420 00:32:31.096 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:31.096 traddr: 10.0.0.1 00:32:31.096 eflags: none 00:32:31.096 sectype: none 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:31.096 21:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.407 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.754 Initializing NVMe Controllers 00:32:34.754 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:34.754 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:34.754 Initialization complete. Launching workers. 00:32:34.754 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58061, failed: 0 00:32:34.754 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 58061, failed to submit 0 00:32:34.754 success 0, unsuccess 58061, failed 0 00:32:34.754 21:23:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:34.754 21:23:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:34.754 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.295 Initializing NVMe Controllers 00:32:37.295 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:37.295 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:37.295 Initialization complete. Launching workers. 
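Before that loop runs, the kernel_target_abort prologue above assembled an in-kernel NVMe/TCP target through configfs. xtrace records the echoed values but not where they are redirected, so the sketch below maps them onto the standard nvmet configfs attribute files; that mapping, and the nvme0n1 backing device, are inferred from the surrounding trace rather than being a verbatim transcript:

  # Kernel NVMe/TCP target setup as performed by configure_kernel_target.
  # Attribute file names are the usual nvmet configfs ones (assumed); the
  # values are the ones echoed in the trace above.
  modprobe nvmet                       # the later teardown also removes nvmet_tcp
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target file
  echo 1            > "$subsys/attr_allow_any_host"              # assumed target file
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

  # The trace then sanity-checks the target with nvme discover -t tcp -a 10.0.0.1 -s 4420
  # (plus the host NQN/ID shown in the log); clean_kernel_target later reverses these steps.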
00:32:37.295 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100003, failed: 0 00:32:37.295 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25202, failed to submit 74801 00:32:37.295 success 0, unsuccess 25202, failed 0 00:32:37.295 21:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:37.295 21:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:37.295 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.589 Initializing NVMe Controllers 00:32:40.589 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:40.589 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:40.589 Initialization complete. Launching workers. 00:32:40.589 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96081, failed: 0 00:32:40.589 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24038, failed to submit 72043 00:32:40.589 success 0, unsuccess 24038, failed 0 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:40.589 21:23:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:43.886 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:43.886 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:43.886 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:45.797 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:45.797 00:32:45.797 real 0m19.332s 00:32:45.797 user 0m8.690s 00:32:45.797 sys 0m5.874s 00:32:45.797 21:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:45.797 21:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:45.797 ************************************ 00:32:45.797 END TEST kernel_target_abort 00:32:45.797 ************************************ 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:45.797 rmmod nvme_tcp 00:32:45.797 rmmod nvme_fabrics 00:32:45.797 rmmod nvme_keyring 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2203558 ']' 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2203558 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2203558 ']' 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2203558 00:32:45.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2203558) - No such process 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2203558 is not found' 00:32:45.797 Process with pid 2203558 is not found 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:45.797 21:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:49.998 Waiting for block devices as requested 00:32:49.998 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:49.998 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:49.998 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:49.998 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:49.998 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:49.998 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:49.998 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:49.998 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:49.998 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:49.998 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:50.259 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:50.259 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:50.259 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:50.259 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:32:50.520 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:50.520 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:50.520 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:50.520 21:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:50.520 21:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:50.520 21:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:50.520 21:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:50.520 21:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.520 21:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:50.520 21:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.068 21:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:53.068 00:32:53.068 real 0m52.002s 00:32:53.068 user 1m3.742s 00:32:53.068 sys 0m19.304s 00:32:53.068 21:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:53.068 21:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:53.068 ************************************ 00:32:53.068 END TEST nvmf_abort_qd_sizes 00:32:53.068 ************************************ 00:32:53.068 21:23:19 -- common/autotest_common.sh@1142 -- # return 0 00:32:53.068 21:23:19 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:53.068 21:23:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:53.068 21:23:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:53.068 21:23:19 -- common/autotest_common.sh@10 -- # set +x 00:32:53.068 ************************************ 00:32:53.068 START TEST keyring_file 00:32:53.068 ************************************ 00:32:53.068 21:23:19 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:53.068 * Looking for test storage... 
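The nvmf_abort_qd_sizes teardown just logged unloads the host-side NVMe/TCP modules, re-runs setup.sh to reset device bindings, and flushes the test address before the totals are printed. A minimal sketch of the module and network portion of that cleanup, with the interface name cvl_0_1 copied from the trace (it is specific to this rig):

  # nvmftestfini / nvmf_tcp_fini cleanup as seen above.
  sync
  modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_tcp, nvme_fabrics
  modprobe -v -r nvme-fabrics   # and nvme_keyring being pulled out as dependencies
  ip -4 addr flush cvl_0_1      # drop the test IPv4 address from the target-facing port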
00:32:53.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:53.068 21:23:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:53.068 21:23:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.068 21:23:20 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.068 21:23:20 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.068 21:23:20 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.068 21:23:20 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.068 21:23:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.068 21:23:20 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.068 21:23:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:53.068 21:23:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:53.068 21:23:20 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:53.068 21:23:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:53.068 21:23:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:53.068 21:23:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:53.068 21:23:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:53.068 21:23:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:53.068 21:23:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:53.068 21:23:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:53.068 21:23:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:53.068 21:23:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:53.068 21:23:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:53.068 21:23:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:53.068 21:23:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.psaV3GZAB9 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:53.069 21:23:20 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.psaV3GZAB9 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.psaV3GZAB9 00:32:53.069 21:23:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.psaV3GZAB9 00:32:53.069 21:23:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Y3Fand6Fnc 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:53.069 21:23:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Y3Fand6Fnc 00:32:53.069 21:23:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Y3Fand6Fnc 00:32:53.069 21:23:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Y3Fand6Fnc 00:32:53.069 21:23:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:53.069 21:23:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=2213904 00:32:53.069 21:23:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2213904 00:32:53.069 21:23:20 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2213904 ']' 00:32:53.069 21:23:20 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.069 21:23:20 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:53.069 21:23:20 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.069 21:23:20 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:53.069 21:23:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:53.069 [2024-07-15 21:23:20.210290] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
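The prep_key calls traced above turn each hex key into a key file under /tmp: a mktemp path, a TLS PSK written in NVMe interchange form, and 0600 permissions (later in the log the test deliberately loosens this to 0660 and shows keyring_file_add_key rejecting the file). A minimal sketch of that helper's flow, assuming the interchange string is redirected straight into the file; the redirection itself is not visible in the xtrace:

  # prep_key flow for key0; key hex and resulting path match the trace.
  key_hex=00112233445566778899aabbccddeeff
  path=$(mktemp)                     # /tmp/tmp.psaV3GZAB9 in this run

  # format_interchange_psk is the test's own helper (nvmf/common.sh); per the
  # trace it wraps the hex key into an NVMeTLSkey-1-prefixed interchange string
  # via a small inline python snippet.
  format_interchange_psk "$key_hex" 0 > "$path"
  chmod 0600 "$path"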
00:32:53.069 [2024-07-15 21:23:20.210370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213904 ] 00:32:53.069 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.069 [2024-07-15 21:23:20.283686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.329 [2024-07-15 21:23:20.360792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.901 21:23:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.901 21:23:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:53.901 21:23:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:53.901 21:23:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.901 21:23:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:53.901 [2024-07-15 21:23:21.000924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.901 null0 00:32:53.901 [2024-07-15 21:23:21.032966] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:53.901 [2024-07-15 21:23:21.033226] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:53.901 [2024-07-15 21:23:21.040971] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.901 21:23:21 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:53.901 [2024-07-15 21:23:21.057017] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:53.901 request: 00:32:53.901 { 00:32:53.901 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.901 "secure_channel": false, 00:32:53.901 "listen_address": { 00:32:53.901 "trtype": "tcp", 00:32:53.901 "traddr": "127.0.0.1", 00:32:53.901 "trsvcid": "4420" 00:32:53.901 }, 00:32:53.901 "method": "nvmf_subsystem_add_listener", 00:32:53.901 "req_id": 1 00:32:53.901 } 00:32:53.901 Got JSON-RPC error response 00:32:53.901 response: 00:32:53.901 { 00:32:53.901 "code": -32602, 00:32:53.901 "message": "Invalid parameters" 00:32:53.901 } 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 
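The trace then launches bdevperf with an RPC socket at /var/tmp/bperf.sock, registers both key files, and attaches a TLS-protected NVMe/TCP controller that refers to key0 by name. A condensed sketch of those RPCs, which appear verbatim further down, with SPDK_DIR again standing in for the full workspace path shown in the log:

  # Keyring and attach RPCs as issued against the bdevperf instance.
  # Note: $RPC is left unquoted on purpose so word splitting passes -s and the
  # socket path separately, mirroring how the test scripts call rpc.py.
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

  $RPC keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9
  $RPC keyring_file_add_key key1 /tmp/tmp.Y3Fand6Fnc

  # get_refcnt-style check: list the keys and pick one entry out with jq.
  $RPC keyring_get_keys | jq '.[] | select(.name == "key0")'

  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0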
00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:53.901 21:23:21 keyring_file -- keyring/file.sh@46 -- # bperfpid=2214124 00:32:53.901 21:23:21 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2214124 /var/tmp/bperf.sock 00:32:53.901 21:23:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2214124 ']' 00:32:53.901 21:23:21 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:53.902 21:23:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:53.902 21:23:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:53.902 21:23:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:53.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:53.902 21:23:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:53.902 21:23:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:53.902 [2024-07-15 21:23:21.113887] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:32:53.902 [2024-07-15 21:23:21.113935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214124 ] 00:32:53.902 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.163 [2024-07-15 21:23:21.194059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.163 [2024-07-15 21:23:21.258670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.733 21:23:21 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:54.733 21:23:21 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:54.733 21:23:21 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9 00:32:54.733 21:23:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9 00:32:54.733 21:23:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Y3Fand6Fnc 00:32:54.733 21:23:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y3Fand6Fnc 00:32:54.993 21:23:22 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:54.993 21:23:22 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:54.993 21:23:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:54.993 21:23:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.993 21:23:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.253 21:23:22 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.psaV3GZAB9 == \/\t\m\p\/\t\m\p\.\p\s\a\V\3\G\Z\A\B\9 ]] 00:32:55.253 21:23:22 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:55.253 21:23:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:55.253 21:23:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.253 21:23:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.253 21:23:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:55.253 21:23:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Y3Fand6Fnc == \/\t\m\p\/\t\m\p\.\Y\3\F\a\n\d\6\F\n\c ]] 00:32:55.253 21:23:22 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:55.253 21:23:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:55.253 21:23:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.253 21:23:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.253 21:23:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.253 21:23:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.513 21:23:22 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:55.513 21:23:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:55.513 21:23:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:55.513 21:23:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.513 21:23:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.513 21:23:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.513 21:23:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:55.774 21:23:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:55.774 21:23:22 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.774 21:23:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.774 [2024-07-15 21:23:22.959583] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:55.774 nvme0n1 00:32:55.774 21:23:23 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:55.774 21:23:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:55.774 21:23:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.774 21:23:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.774 21:23:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.774 21:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.034 21:23:23 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:56.034 21:23:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:56.034 21:23:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:56.034 21:23:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:56.034 21:23:23 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:56.034 21:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.034 21:23:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:56.294 21:23:23 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:56.294 21:23:23 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:56.294 Running I/O for 1 seconds... 00:32:57.233 00:32:57.233 Latency(us) 00:32:57.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.233 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:57.233 nvme0n1 : 1.01 10503.75 41.03 0.00 0.00 12111.82 7045.12 23592.96 00:32:57.233 =================================================================================================================== 00:32:57.233 Total : 10503.75 41.03 0.00 0.00 12111.82 7045.12 23592.96 00:32:57.233 0 00:32:57.233 21:23:24 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:57.233 21:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:57.493 21:23:24 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:57.493 21:23:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:57.493 21:23:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.493 21:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.493 21:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:57.493 21:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.753 21:23:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:57.753 21:23:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:57.753 21:23:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:57.753 21:23:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.753 21:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.753 21:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.753 21:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:57.753 21:23:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:57.753 21:23:24 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:57.753 21:23:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:57.753 21:23:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:57.753 21:23:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:57.753 21:23:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.753 21:23:24 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:57.753 21:23:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.753 21:23:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:57.753 21:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:58.015 [2024-07-15 21:23:25.128214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:58.015 [2024-07-15 21:23:25.128787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18030f0 (107): Transport endpoint is not connected 00:32:58.015 [2024-07-15 21:23:25.129784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18030f0 (9): Bad file descriptor 00:32:58.015 [2024-07-15 21:23:25.130786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:58.015 [2024-07-15 21:23:25.130794] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:58.015 [2024-07-15 21:23:25.130799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:58.015 request: 00:32:58.015 { 00:32:58.015 "name": "nvme0", 00:32:58.015 "trtype": "tcp", 00:32:58.015 "traddr": "127.0.0.1", 00:32:58.015 "adrfam": "ipv4", 00:32:58.015 "trsvcid": "4420", 00:32:58.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:58.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:58.015 "prchk_reftag": false, 00:32:58.015 "prchk_guard": false, 00:32:58.015 "hdgst": false, 00:32:58.015 "ddgst": false, 00:32:58.015 "psk": "key1", 00:32:58.015 "method": "bdev_nvme_attach_controller", 00:32:58.015 "req_id": 1 00:32:58.015 } 00:32:58.015 Got JSON-RPC error response 00:32:58.015 response: 00:32:58.015 { 00:32:58.015 "code": -5, 00:32:58.015 "message": "Input/output error" 00:32:58.015 } 00:32:58.015 21:23:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:58.015 21:23:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:58.015 21:23:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:58.015 21:23:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:58.015 21:23:25 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:58.015 21:23:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:58.015 21:23:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.015 21:23:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.015 21:23:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:58.015 21:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.015 21:23:25 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:58.015 21:23:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:58.275 21:23:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:58.275 21:23:25 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.275 21:23:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.275 21:23:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:58.275 21:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.275 21:23:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:58.275 21:23:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:58.275 21:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:58.536 21:23:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:58.536 21:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:58.536 21:23:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:58.536 21:23:25 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:58.536 21:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.796 21:23:25 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:58.796 21:23:25 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.psaV3GZAB9 00:32:58.796 21:23:25 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9 00:32:58.796 21:23:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:58.796 21:23:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9 00:32:58.796 21:23:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:58.796 21:23:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:58.796 21:23:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:58.796 21:23:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:58.796 21:23:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9 00:32:58.797 21:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9 00:32:58.797 [2024-07-15 21:23:26.078626] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.psaV3GZAB9': 0100660 00:32:58.797 [2024-07-15 21:23:26.078646] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:58.797 request: 00:32:58.797 { 00:32:58.797 "name": "key0", 00:32:58.797 "path": "/tmp/tmp.psaV3GZAB9", 00:32:58.797 "method": "keyring_file_add_key", 00:32:58.797 "req_id": 1 00:32:58.797 } 00:32:58.797 Got JSON-RPC error response 00:32:58.797 response: 00:32:58.797 { 00:32:58.797 "code": -1, 00:32:58.797 "message": "Operation not permitted" 00:32:58.797 } 00:32:59.057 21:23:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:59.057 21:23:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:59.057 21:23:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:59.057 21:23:26 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:59.057 21:23:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.psaV3GZAB9 00:32:59.057 21:23:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9 00:32:59.057 21:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.psaV3GZAB9 00:32:59.057 21:23:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.psaV3GZAB9 00:32:59.057 21:23:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:59.057 21:23:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:59.057 21:23:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.057 21:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.057 21:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.057 21:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:59.318 21:23:26 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:59.318 21:23:26 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.318 21:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.318 [2024-07-15 21:23:26.555841] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.psaV3GZAB9': No such file or directory 00:32:59.318 [2024-07-15 21:23:26.555856] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:59.318 [2024-07-15 21:23:26.555872] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:59.318 [2024-07-15 21:23:26.555876] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:59.318 [2024-07-15 21:23:26.555882] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:59.318 request: 00:32:59.318 { 00:32:59.318 "name": "nvme0", 00:32:59.318 "trtype": "tcp", 00:32:59.318 "traddr": "127.0.0.1", 00:32:59.318 "adrfam": "ipv4", 00:32:59.318 
"trsvcid": "4420", 00:32:59.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:59.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:59.318 "prchk_reftag": false, 00:32:59.318 "prchk_guard": false, 00:32:59.318 "hdgst": false, 00:32:59.318 "ddgst": false, 00:32:59.318 "psk": "key0", 00:32:59.318 "method": "bdev_nvme_attach_controller", 00:32:59.318 "req_id": 1 00:32:59.318 } 00:32:59.318 Got JSON-RPC error response 00:32:59.318 response: 00:32:59.318 { 00:32:59.318 "code": -19, 00:32:59.318 "message": "No such device" 00:32:59.318 } 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:59.318 21:23:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:59.318 21:23:26 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:59.318 21:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:59.579 21:23:26 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.drbTbyipns 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:59.579 21:23:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:59.579 21:23:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:59.579 21:23:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:59.579 21:23:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:59.579 21:23:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:59.579 21:23:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.drbTbyipns 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.drbTbyipns 00:32:59.579 21:23:26 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.drbTbyipns 00:32:59.579 21:23:26 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.drbTbyipns 00:32:59.579 21:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.drbTbyipns 00:32:59.839 21:23:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.839 21:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:00.099 nvme0n1 00:33:00.099 
21:23:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:00.099 21:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:00.099 21:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.100 21:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:00.100 21:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.100 21:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.100 21:23:27 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:00.100 21:23:27 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:00.100 21:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:00.360 21:23:27 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:00.360 21:23:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:00.360 21:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.360 21:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:00.360 21:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.360 21:23:27 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:00.360 21:23:27 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:00.360 21:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:00.360 21:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.360 21:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:00.360 21:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.360 21:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.621 21:23:27 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:00.621 21:23:27 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:00.621 21:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:00.881 21:23:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:00.881 21:23:27 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:00.881 21:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.881 21:23:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:00.881 21:23:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.drbTbyipns 00:33:00.881 21:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.drbTbyipns 00:33:01.140 21:23:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Y3Fand6Fnc 00:33:01.140 21:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y3Fand6Fnc 00:33:01.140 21:23:28 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:01.140 21:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:01.399 nvme0n1 00:33:01.399 21:23:28 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:01.399 21:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:01.660 21:23:28 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:01.660 "subsystems": [ 00:33:01.660 { 00:33:01.660 "subsystem": "keyring", 00:33:01.660 "config": [ 00:33:01.660 { 00:33:01.660 "method": "keyring_file_add_key", 00:33:01.660 "params": { 00:33:01.660 "name": "key0", 00:33:01.660 "path": "/tmp/tmp.drbTbyipns" 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "keyring_file_add_key", 00:33:01.660 "params": { 00:33:01.660 "name": "key1", 00:33:01.660 "path": "/tmp/tmp.Y3Fand6Fnc" 00:33:01.660 } 00:33:01.660 } 00:33:01.660 ] 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "subsystem": "iobuf", 00:33:01.660 "config": [ 00:33:01.660 { 00:33:01.660 "method": "iobuf_set_options", 00:33:01.660 "params": { 00:33:01.660 "small_pool_count": 8192, 00:33:01.660 "large_pool_count": 1024, 00:33:01.660 "small_bufsize": 8192, 00:33:01.660 "large_bufsize": 135168 00:33:01.660 } 00:33:01.660 } 00:33:01.660 ] 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "subsystem": "sock", 00:33:01.660 "config": [ 00:33:01.660 { 00:33:01.660 "method": "sock_set_default_impl", 00:33:01.660 "params": { 00:33:01.660 "impl_name": "posix" 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "sock_impl_set_options", 00:33:01.660 "params": { 00:33:01.660 "impl_name": "ssl", 00:33:01.660 "recv_buf_size": 4096, 00:33:01.660 "send_buf_size": 4096, 00:33:01.660 "enable_recv_pipe": true, 00:33:01.660 "enable_quickack": false, 00:33:01.660 "enable_placement_id": 0, 00:33:01.660 "enable_zerocopy_send_server": true, 00:33:01.660 "enable_zerocopy_send_client": false, 00:33:01.660 "zerocopy_threshold": 0, 00:33:01.660 "tls_version": 0, 00:33:01.660 "enable_ktls": false 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "sock_impl_set_options", 00:33:01.660 "params": { 00:33:01.660 "impl_name": "posix", 00:33:01.660 "recv_buf_size": 2097152, 00:33:01.660 "send_buf_size": 2097152, 00:33:01.660 "enable_recv_pipe": true, 00:33:01.660 "enable_quickack": false, 00:33:01.660 "enable_placement_id": 0, 00:33:01.660 "enable_zerocopy_send_server": true, 00:33:01.660 "enable_zerocopy_send_client": false, 00:33:01.660 "zerocopy_threshold": 0, 00:33:01.660 "tls_version": 0, 00:33:01.660 "enable_ktls": false 00:33:01.660 } 00:33:01.660 } 00:33:01.660 ] 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "subsystem": "vmd", 00:33:01.660 "config": [] 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "subsystem": "accel", 00:33:01.660 "config": [ 00:33:01.660 { 00:33:01.660 "method": "accel_set_options", 00:33:01.660 "params": { 00:33:01.660 "small_cache_size": 128, 00:33:01.660 "large_cache_size": 16, 00:33:01.660 "task_count": 2048, 00:33:01.660 "sequence_count": 2048, 00:33:01.660 "buf_count": 2048 00:33:01.660 } 00:33:01.660 } 00:33:01.660 ] 00:33:01.660 
}, 00:33:01.660 { 00:33:01.660 "subsystem": "bdev", 00:33:01.660 "config": [ 00:33:01.660 { 00:33:01.660 "method": "bdev_set_options", 00:33:01.660 "params": { 00:33:01.660 "bdev_io_pool_size": 65535, 00:33:01.660 "bdev_io_cache_size": 256, 00:33:01.660 "bdev_auto_examine": true, 00:33:01.660 "iobuf_small_cache_size": 128, 00:33:01.660 "iobuf_large_cache_size": 16 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "bdev_raid_set_options", 00:33:01.660 "params": { 00:33:01.660 "process_window_size_kb": 1024 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "bdev_iscsi_set_options", 00:33:01.660 "params": { 00:33:01.660 "timeout_sec": 30 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "bdev_nvme_set_options", 00:33:01.660 "params": { 00:33:01.660 "action_on_timeout": "none", 00:33:01.660 "timeout_us": 0, 00:33:01.660 "timeout_admin_us": 0, 00:33:01.660 "keep_alive_timeout_ms": 10000, 00:33:01.660 "arbitration_burst": 0, 00:33:01.660 "low_priority_weight": 0, 00:33:01.660 "medium_priority_weight": 0, 00:33:01.660 "high_priority_weight": 0, 00:33:01.660 "nvme_adminq_poll_period_us": 10000, 00:33:01.660 "nvme_ioq_poll_period_us": 0, 00:33:01.660 "io_queue_requests": 512, 00:33:01.660 "delay_cmd_submit": true, 00:33:01.660 "transport_retry_count": 4, 00:33:01.660 "bdev_retry_count": 3, 00:33:01.660 "transport_ack_timeout": 0, 00:33:01.660 "ctrlr_loss_timeout_sec": 0, 00:33:01.660 "reconnect_delay_sec": 0, 00:33:01.660 "fast_io_fail_timeout_sec": 0, 00:33:01.660 "disable_auto_failback": false, 00:33:01.660 "generate_uuids": false, 00:33:01.660 "transport_tos": 0, 00:33:01.660 "nvme_error_stat": false, 00:33:01.660 "rdma_srq_size": 0, 00:33:01.660 "io_path_stat": false, 00:33:01.660 "allow_accel_sequence": false, 00:33:01.660 "rdma_max_cq_size": 0, 00:33:01.660 "rdma_cm_event_timeout_ms": 0, 00:33:01.660 "dhchap_digests": [ 00:33:01.660 "sha256", 00:33:01.660 "sha384", 00:33:01.660 "sha512" 00:33:01.660 ], 00:33:01.660 "dhchap_dhgroups": [ 00:33:01.660 "null", 00:33:01.660 "ffdhe2048", 00:33:01.660 "ffdhe3072", 00:33:01.660 "ffdhe4096", 00:33:01.660 "ffdhe6144", 00:33:01.660 "ffdhe8192" 00:33:01.660 ] 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "bdev_nvme_attach_controller", 00:33:01.660 "params": { 00:33:01.660 "name": "nvme0", 00:33:01.660 "trtype": "TCP", 00:33:01.660 "adrfam": "IPv4", 00:33:01.660 "traddr": "127.0.0.1", 00:33:01.660 "trsvcid": "4420", 00:33:01.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.660 "prchk_reftag": false, 00:33:01.660 "prchk_guard": false, 00:33:01.660 "ctrlr_loss_timeout_sec": 0, 00:33:01.660 "reconnect_delay_sec": 0, 00:33:01.660 "fast_io_fail_timeout_sec": 0, 00:33:01.660 "psk": "key0", 00:33:01.660 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:01.660 "hdgst": false, 00:33:01.660 "ddgst": false 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "bdev_nvme_set_hotplug", 00:33:01.660 "params": { 00:33:01.660 "period_us": 100000, 00:33:01.660 "enable": false 00:33:01.660 } 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "method": "bdev_wait_for_examine" 00:33:01.660 } 00:33:01.660 ] 00:33:01.660 }, 00:33:01.660 { 00:33:01.660 "subsystem": "nbd", 00:33:01.660 "config": [] 00:33:01.660 } 00:33:01.660 ] 00:33:01.660 }' 00:33:01.660 21:23:28 keyring_file -- keyring/file.sh@114 -- # killprocess 2214124 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2214124 ']' 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2214124 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2214124 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2214124' 00:33:01.660 killing process with pid 2214124 00:33:01.660 21:23:28 keyring_file -- common/autotest_common.sh@967 -- # kill 2214124 00:33:01.660 Received shutdown signal, test time was about 1.000000 seconds 00:33:01.660 00:33:01.660 Latency(us) 00:33:01.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.660 =================================================================================================================== 00:33:01.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:01.661 21:23:28 keyring_file -- common/autotest_common.sh@972 -- # wait 2214124 00:33:01.921 21:23:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=2215638 00:33:01.921 21:23:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2215638 /var/tmp/bperf.sock 00:33:01.921 21:23:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2215638 ']' 00:33:01.921 21:23:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:01.921 21:23:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:01.921 21:23:29 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:01.921 21:23:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:01.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
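
The relaunch traced above follows a pattern worth spelling out: file.sh dumps the live bperf configuration with save_config (including the two keyring_file_add_key entries for /tmp/tmp.drbTbyipns and /tmp/tmp.Y3Fand6Fnc), kills the first bdevperf, and starts a second instance that reads that JSON back through process substitution, which is what shows up as "-c /dev/fd/63" on the command line. A minimal sketch of the same flow; the relative paths, the "config" variable name, and the trailing "&" are assumptions for brevity, everything else is taken from the command lines in this trace:

    # Sketch only; paths shortened from the absolute Jenkins workspace paths above.
    SOCK=/var/tmp/bperf.sock
    config=$(scripts/rpc.py -s "$SOCK" save_config)      # JSON is echoed on the lines below
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r "$SOCK" -z -c <(echo "$config") &              # <(...) is what appears as /dev/fd/63

The configuration echoed on the following lines is exactly the JSON returned by save_config, so the restarted bdevperf comes up with both file-based keys already registered (hence the later "keyring_get_keys | jq length" check expecting 2).
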
00:33:01.921 21:23:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:01.921 21:23:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:01.921 21:23:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:01.921 "subsystems": [ 00:33:01.921 { 00:33:01.921 "subsystem": "keyring", 00:33:01.921 "config": [ 00:33:01.921 { 00:33:01.921 "method": "keyring_file_add_key", 00:33:01.921 "params": { 00:33:01.921 "name": "key0", 00:33:01.921 "path": "/tmp/tmp.drbTbyipns" 00:33:01.921 } 00:33:01.921 }, 00:33:01.921 { 00:33:01.921 "method": "keyring_file_add_key", 00:33:01.921 "params": { 00:33:01.921 "name": "key1", 00:33:01.921 "path": "/tmp/tmp.Y3Fand6Fnc" 00:33:01.921 } 00:33:01.921 } 00:33:01.921 ] 00:33:01.921 }, 00:33:01.921 { 00:33:01.921 "subsystem": "iobuf", 00:33:01.921 "config": [ 00:33:01.921 { 00:33:01.921 "method": "iobuf_set_options", 00:33:01.921 "params": { 00:33:01.921 "small_pool_count": 8192, 00:33:01.921 "large_pool_count": 1024, 00:33:01.921 "small_bufsize": 8192, 00:33:01.921 "large_bufsize": 135168 00:33:01.921 } 00:33:01.921 } 00:33:01.921 ] 00:33:01.921 }, 00:33:01.921 { 00:33:01.921 "subsystem": "sock", 00:33:01.921 "config": [ 00:33:01.921 { 00:33:01.921 "method": "sock_set_default_impl", 00:33:01.921 "params": { 00:33:01.921 "impl_name": "posix" 00:33:01.921 } 00:33:01.921 }, 00:33:01.921 { 00:33:01.921 "method": "sock_impl_set_options", 00:33:01.921 "params": { 00:33:01.921 "impl_name": "ssl", 00:33:01.921 "recv_buf_size": 4096, 00:33:01.921 "send_buf_size": 4096, 00:33:01.921 "enable_recv_pipe": true, 00:33:01.921 "enable_quickack": false, 00:33:01.921 "enable_placement_id": 0, 00:33:01.921 "enable_zerocopy_send_server": true, 00:33:01.921 "enable_zerocopy_send_client": false, 00:33:01.921 "zerocopy_threshold": 0, 00:33:01.921 "tls_version": 0, 00:33:01.921 "enable_ktls": false 00:33:01.921 } 00:33:01.921 }, 00:33:01.921 { 00:33:01.921 "method": "sock_impl_set_options", 00:33:01.921 "params": { 00:33:01.921 "impl_name": "posix", 00:33:01.921 "recv_buf_size": 2097152, 00:33:01.921 "send_buf_size": 2097152, 00:33:01.921 "enable_recv_pipe": true, 00:33:01.921 "enable_quickack": false, 00:33:01.921 "enable_placement_id": 0, 00:33:01.921 "enable_zerocopy_send_server": true, 00:33:01.921 "enable_zerocopy_send_client": false, 00:33:01.921 "zerocopy_threshold": 0, 00:33:01.921 "tls_version": 0, 00:33:01.921 "enable_ktls": false 00:33:01.921 } 00:33:01.921 } 00:33:01.921 ] 00:33:01.921 }, 00:33:01.921 { 00:33:01.921 "subsystem": "vmd", 00:33:01.921 "config": [] 00:33:01.921 }, 00:33:01.921 { 00:33:01.921 "subsystem": "accel", 00:33:01.921 "config": [ 00:33:01.921 { 00:33:01.921 "method": "accel_set_options", 00:33:01.921 "params": { 00:33:01.921 "small_cache_size": 128, 00:33:01.921 "large_cache_size": 16, 00:33:01.921 "task_count": 2048, 00:33:01.921 "sequence_count": 2048, 00:33:01.921 "buf_count": 2048 00:33:01.921 } 00:33:01.921 } 00:33:01.921 ] 00:33:01.921 }, 00:33:01.921 { 00:33:01.921 "subsystem": "bdev", 00:33:01.921 "config": [ 00:33:01.921 { 00:33:01.921 "method": "bdev_set_options", 00:33:01.921 "params": { 00:33:01.921 "bdev_io_pool_size": 65535, 00:33:01.921 "bdev_io_cache_size": 256, 00:33:01.921 "bdev_auto_examine": true, 00:33:01.922 "iobuf_small_cache_size": 128, 00:33:01.922 "iobuf_large_cache_size": 16 00:33:01.922 } 00:33:01.922 }, 00:33:01.922 { 00:33:01.922 "method": "bdev_raid_set_options", 00:33:01.922 "params": { 00:33:01.922 "process_window_size_kb": 1024 00:33:01.922 } 00:33:01.922 }, 00:33:01.922 { 00:33:01.922 
"method": "bdev_iscsi_set_options", 00:33:01.922 "params": { 00:33:01.922 "timeout_sec": 30 00:33:01.922 } 00:33:01.922 }, 00:33:01.922 { 00:33:01.922 "method": "bdev_nvme_set_options", 00:33:01.922 "params": { 00:33:01.922 "action_on_timeout": "none", 00:33:01.922 "timeout_us": 0, 00:33:01.922 "timeout_admin_us": 0, 00:33:01.922 "keep_alive_timeout_ms": 10000, 00:33:01.922 "arbitration_burst": 0, 00:33:01.922 "low_priority_weight": 0, 00:33:01.922 "medium_priority_weight": 0, 00:33:01.922 "high_priority_weight": 0, 00:33:01.922 "nvme_adminq_poll_period_us": 10000, 00:33:01.922 "nvme_ioq_poll_period_us": 0, 00:33:01.922 "io_queue_requests": 512, 00:33:01.922 "delay_cmd_submit": true, 00:33:01.922 "transport_retry_count": 4, 00:33:01.922 "bdev_retry_count": 3, 00:33:01.922 "transport_ack_timeout": 0, 00:33:01.922 "ctrlr_loss_timeout_sec": 0, 00:33:01.922 "reconnect_delay_sec": 0, 00:33:01.922 "fast_io_fail_timeout_sec": 0, 00:33:01.922 "disable_auto_failback": false, 00:33:01.922 "generate_uuids": false, 00:33:01.922 "transport_tos": 0, 00:33:01.922 "nvme_error_stat": false, 00:33:01.922 "rdma_srq_size": 0, 00:33:01.922 "io_path_stat": false, 00:33:01.922 "allow_accel_sequence": false, 00:33:01.922 "rdma_max_cq_size": 0, 00:33:01.922 "rdma_cm_event_timeout_ms": 0, 00:33:01.922 "dhchap_digests": [ 00:33:01.922 "sha256", 00:33:01.922 "sha384", 00:33:01.922 "sha512" 00:33:01.922 ], 00:33:01.922 "dhchap_dhgroups": [ 00:33:01.922 "null", 00:33:01.922 "ffdhe2048", 00:33:01.922 "ffdhe3072", 00:33:01.922 "ffdhe4096", 00:33:01.922 "ffdhe6144", 00:33:01.922 "ffdhe8192" 00:33:01.922 ] 00:33:01.922 } 00:33:01.922 }, 00:33:01.922 { 00:33:01.922 "method": "bdev_nvme_attach_controller", 00:33:01.922 "params": { 00:33:01.922 "name": "nvme0", 00:33:01.922 "trtype": "TCP", 00:33:01.922 "adrfam": "IPv4", 00:33:01.922 "traddr": "127.0.0.1", 00:33:01.922 "trsvcid": "4420", 00:33:01.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.922 "prchk_reftag": false, 00:33:01.922 "prchk_guard": false, 00:33:01.922 "ctrlr_loss_timeout_sec": 0, 00:33:01.922 "reconnect_delay_sec": 0, 00:33:01.922 "fast_io_fail_timeout_sec": 0, 00:33:01.922 "psk": "key0", 00:33:01.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:01.922 "hdgst": false, 00:33:01.922 "ddgst": false 00:33:01.922 } 00:33:01.922 }, 00:33:01.922 { 00:33:01.922 "method": "bdev_nvme_set_hotplug", 00:33:01.922 "params": { 00:33:01.922 "period_us": 100000, 00:33:01.922 "enable": false 00:33:01.922 } 00:33:01.922 }, 00:33:01.922 { 00:33:01.922 "method": "bdev_wait_for_examine" 00:33:01.922 } 00:33:01.922 ] 00:33:01.922 }, 00:33:01.922 { 00:33:01.922 "subsystem": "nbd", 00:33:01.922 "config": [] 00:33:01.922 } 00:33:01.922 ] 00:33:01.922 }' 00:33:01.922 [2024-07-15 21:23:29.069665] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:33:01.922 [2024-07-15 21:23:29.069723] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215638 ] 00:33:01.922 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.922 [2024-07-15 21:23:29.148464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.922 [2024-07-15 21:23:29.202071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.182 [2024-07-15 21:23:29.344114] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:02.754 21:23:29 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:02.754 21:23:29 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:02.754 21:23:29 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:02.754 21:23:29 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:02.754 21:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.754 21:23:29 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:02.754 21:23:29 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:02.754 21:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:02.754 21:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:02.754 21:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:02.754 21:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.754 21:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:03.014 21:23:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:03.014 21:23:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:03.014 21:23:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:03.014 21:23:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.014 21:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.014 21:23:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:03.014 21:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.274 21:23:30 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:03.274 21:23:30 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:03.274 21:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:03.274 21:23:30 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:03.274 21:23:30 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:03.274 21:23:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:03.274 21:23:30 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.drbTbyipns /tmp/tmp.Y3Fand6Fnc 00:33:03.274 21:23:30 keyring_file -- keyring/file.sh@20 -- # killprocess 2215638 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2215638 ']' 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2215638 00:33:03.274 21:23:30 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2215638 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2215638' 00:33:03.274 killing process with pid 2215638 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@967 -- # kill 2215638 00:33:03.274 Received shutdown signal, test time was about 1.000000 seconds 00:33:03.274 00:33:03.274 Latency(us) 00:33:03.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.274 =================================================================================================================== 00:33:03.274 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:03.274 21:23:30 keyring_file -- common/autotest_common.sh@972 -- # wait 2215638 00:33:03.534 21:23:30 keyring_file -- keyring/file.sh@21 -- # killprocess 2213904 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2213904 ']' 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2213904 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213904 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213904' 00:33:03.534 killing process with pid 2213904 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@967 -- # kill 2213904 00:33:03.534 [2024-07-15 21:23:30.696894] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:03.534 21:23:30 keyring_file -- common/autotest_common.sh@972 -- # wait 2213904 00:33:03.794 00:33:03.795 real 0m10.988s 00:33:03.795 user 0m25.914s 00:33:03.795 sys 0m2.538s 00:33:03.795 21:23:30 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.795 21:23:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:03.795 ************************************ 00:33:03.795 END TEST keyring_file 00:33:03.795 ************************************ 00:33:03.795 21:23:30 -- common/autotest_common.sh@1142 -- # return 0 00:33:03.795 21:23:30 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:03.795 21:23:30 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:03.795 21:23:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:03.795 21:23:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.795 21:23:30 -- common/autotest_common.sh@10 -- # set +x 00:33:03.795 ************************************ 00:33:03.795 START TEST keyring_linux 00:33:03.795 ************************************ 00:33:03.795 21:23:30 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:03.795 * Looking for test storage... 00:33:04.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:04.056 21:23:31 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:04.056 21:23:31 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.056 21:23:31 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.056 21:23:31 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.056 21:23:31 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.056 21:23:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.056 21:23:31 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.056 21:23:31 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.056 21:23:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:04.056 21:23:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:04.056 21:23:31 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:04.056 21:23:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:04.056 21:23:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:04.057 21:23:31 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:04.057 /tmp/:spdk-test:key0 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:04.057 21:23:31 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:04.057 21:23:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:04.057 /tmp/:spdk-test:key1 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2216220 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2216220 00:33:04.057 21:23:31 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:04.057 21:23:31 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2216220 ']' 00:33:04.057 21:23:31 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.057 21:23:31 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:04.057 21:23:31 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.057 21:23:31 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:04.057 21:23:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:04.057 [2024-07-15 21:23:31.267832] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
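
Up to this point the keyring_linux prep mirrors the keyring_file prep: prep_key runs format_interchange_psk over the raw hex keys (00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00), lands the results at /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, and tightens the permissions to 0600. The inline "python -" helper that does the formatting is not captured in the trace, but the strings it produces appear verbatim a few lines below when they are loaded with keyctl, so their shape can be inspected directly. A hedged sketch, inferring the framing (key material plus a 4-byte trailer, presumably a checksum) only from the values in this log rather than from any spec:

    # Sketch: peel apart one interchange-format PSK seen in this run.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    payload=${psk#NVMeTLSkey-1:00:}; payload=${payload%:}      # strip label, digest id and trailing ':'
    echo "$payload" | base64 -d | head -c 32; echo              # -> 00112233445566778899aabbccddeeff
    echo "$payload" | base64 -d | tail -c 4 | xxd -p            # 4-byte trailer, presumably a CRC of the key
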
00:33:04.057 [2024-07-15 21:23:31.267888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216220 ] 00:33:04.057 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.057 [2024-07-15 21:23:31.335270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.318 [2024-07-15 21:23:31.400084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:04.892 21:23:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:04.892 [2024-07-15 21:23:32.018964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.892 null0 00:33:04.892 [2024-07-15 21:23:32.051004] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:04.892 [2024-07-15 21:23:32.051393] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.892 21:23:32 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:04.892 589990599 00:33:04.892 21:23:32 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:04.892 91435788 00:33:04.892 21:23:32 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2216373 00:33:04.892 21:23:32 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2216373 /var/tmp/bperf.sock 00:33:04.892 21:23:32 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2216373 ']' 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:04.892 21:23:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:04.892 [2024-07-15 21:23:32.138479] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
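
With the target listening and both keys loaded into the kernel session keyring, the rest of the run exercises the linux keyring path end to end. Restated in isolation below; the shortened rpc.py path and the "sn" variable are assumptions, the commands and flags are copied from the trace, and the RPC ordering reflects the fact that this bdevperf was started with --wait-for-rpc, so the keyring option and framework init go in over the socket first:

    # Sketch mirroring the linux.sh steps traced above and below.
    SOCK=/var/tmp/bperf.sock
    PSK='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'   # value copied from this log

    keyctl add user :spdk-test:key0 "$PSK" @s      # prints the new serial (589990599 in this run)
    sn=$(keyctl search @s user :spdk-test:key0)    # recover the serial by key name
    keyctl print "$sn"                             # dump the stored PSK for the sanity check

    scripts/rpc.py -s "$SOCK" keyring_linux_set_options --enable
    scripts/rpc.py -s "$SOCK" framework_start_init
    scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    keyctl unlink "$sn"                            # cleanup, as the test does at the end

Note the difference from the keyring_file run above: here --psk names a key in the kernel session keyring (":spdk-test:key0") rather than a key previously registered from a file with keyring_file_add_key.
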
00:33:04.892 [2024-07-15 21:23:32.138529] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216373 ] 00:33:04.892 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.152 [2024-07-15 21:23:32.216181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.152 [2024-07-15 21:23:32.269567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.725 21:23:32 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:05.725 21:23:32 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:05.725 21:23:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:05.725 21:23:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:05.985 21:23:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:05.985 21:23:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:05.985 21:23:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:05.985 21:23:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:06.246 [2024-07-15 21:23:33.361221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:06.246 nvme0n1 00:33:06.246 21:23:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:06.246 21:23:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:06.246 21:23:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:06.246 21:23:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:06.246 21:23:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:06.246 21:23:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:06.507 21:23:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:06.507 21:23:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:06.507 21:23:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@25 -- # sn=589990599 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@26 -- # [[ 589990599 == \5\8\9\9\9\0\5\9\9 ]] 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 589990599 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:06.507 21:23:33 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:06.768 Running I/O for 1 seconds... 00:33:07.822 00:33:07.822 Latency(us) 00:33:07.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.822 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:07.822 nvme0n1 : 1.01 11036.29 43.11 0.00 0.00 11525.35 9284.27 19333.12 00:33:07.822 =================================================================================================================== 00:33:07.822 Total : 11036.29 43.11 0.00 0.00 11525.35 9284.27 19333.12 00:33:07.822 0 00:33:07.822 21:23:34 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:07.822 21:23:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:07.822 21:23:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:07.822 21:23:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:07.822 21:23:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:07.822 21:23:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:07.822 21:23:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:07.822 21:23:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.082 21:23:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:08.082 21:23:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:08.082 21:23:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:08.082 21:23:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:08.082 21:23:35 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:08.082 21:23:35 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:08.082 21:23:35 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:08.082 21:23:35 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.082 21:23:35 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:08.082 21:23:35 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.082 21:23:35 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:08.082 21:23:35 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:08.082 [2024-07-15 21:23:35.363369] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:08.082 [2024-07-15 21:23:35.364110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85a0b0 (107): Transport endpoint is not connected 00:33:08.082 [2024-07-15 21:23:35.365107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85a0b0 (9): Bad file descriptor 00:33:08.082 [2024-07-15 21:23:35.366109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:08.082 [2024-07-15 21:23:35.366116] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:08.082 [2024-07-15 21:23:35.366122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:08.082 request: 00:33:08.082 { 00:33:08.082 "name": "nvme0", 00:33:08.082 "trtype": "tcp", 00:33:08.082 "traddr": "127.0.0.1", 00:33:08.082 "adrfam": "ipv4", 00:33:08.082 "trsvcid": "4420", 00:33:08.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:08.082 "prchk_reftag": false, 00:33:08.082 "prchk_guard": false, 00:33:08.082 "hdgst": false, 00:33:08.082 "ddgst": false, 00:33:08.082 "psk": ":spdk-test:key1", 00:33:08.082 "method": "bdev_nvme_attach_controller", 00:33:08.082 "req_id": 1 00:33:08.082 } 00:33:08.082 Got JSON-RPC error response 00:33:08.082 response: 00:33:08.082 { 00:33:08.082 "code": -5, 00:33:08.082 "message": "Input/output error" 00:33:08.082 } 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@33 -- # sn=589990599 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 589990599 00:33:08.343 1 links removed 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@33 -- # sn=91435788 00:33:08.343 21:23:35 
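[Editor's note] The JSON-RPC exchange above is the negative half of the test: attaching with :spdk-test:key1 is expected to fail (the response carries code -5, Input/output error), and the NOT wrapper turns that failure into a pass. Cleanup then resolves each test key name back to its serial number and unlinks it from the session keyring, which is what produces the two "1 links removed" messages. A small sketch of that pattern, assuming the same key names as this run; the '!' stands in for the test's NOT helper.

  # Expect the attach with the second key to fail; '!' inverts the exit status,
  # so a failed attach keeps the script green, as the NOT helper does above.
  ! scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1

  # Unlink both test keys from the session keyring by serial number.
  for name in ":spdk-test:key0" ":spdk-test:key1"; do
      if sn=$(keyctl search @s user "$name" 2>/dev/null); then
          keyctl unlink "$sn" @s
      fi
  done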
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 91435788 00:33:08.343 1 links removed 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2216373 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2216373 ']' 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2216373 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216373 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216373' 00:33:08.343 killing process with pid 2216373 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@967 -- # kill 2216373 00:33:08.343 Received shutdown signal, test time was about 1.000000 seconds 00:33:08.343 00:33:08.343 Latency(us) 00:33:08.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.343 =================================================================================================================== 00:33:08.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@972 -- # wait 2216373 00:33:08.343 21:23:35 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2216220 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2216220 ']' 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2216220 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216220 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216220' 00:33:08.343 killing process with pid 2216220 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@967 -- # kill 2216220 00:33:08.343 21:23:35 keyring_linux -- common/autotest_common.sh@972 -- # wait 2216220 00:33:08.603 00:33:08.603 real 0m4.838s 00:33:08.603 user 0m8.385s 00:33:08.603 sys 0m1.433s 00:33:08.603 21:23:35 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:08.603 21:23:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:08.603 ************************************ 00:33:08.603 END TEST keyring_linux 00:33:08.603 ************************************ 00:33:08.603 21:23:35 -- common/autotest_common.sh@1142 -- # return 0 00:33:08.603 21:23:35 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@339 
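[Editor's note] The two killprocess calls above follow a fixed shape: check that the PID is still alive with kill -0, read its command name with ps (reactor_1 for the bperf app, reactor_0 for the target), make sure it is not a sudo wrapper, send the default SIGTERM, and wait for it to exit. A simplified sketch of that shape is below; the real autotest_common.sh helper has additional branches (sudo wrappers and non-Linux hosts) that are omitted here.

  # Simplified kill-and-wait helper in the shape the trace above follows.
  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 1       # nothing to do if it already exited
      local name
      name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 / reactor_1
      [[ $name == sudo ]] && return 1              # sudo wrappers are handled differently upstream
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true              # reap it; ignore the signal exit code
  }

  # Usage matching the two calls in the log:
  #   killprocess 2216373   # bdevperf (bperf) instance
  #   killprocess 2216220   # nvmf target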
-- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:08.603 21:23:35 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:08.603 21:23:35 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:08.603 21:23:35 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:08.603 21:23:35 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:08.603 21:23:35 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:08.603 21:23:35 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:08.603 21:23:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:08.603 21:23:35 -- common/autotest_common.sh@10 -- # set +x 00:33:08.603 21:23:35 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:08.603 21:23:35 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:08.603 21:23:35 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:08.603 21:23:35 -- common/autotest_common.sh@10 -- # set +x 00:33:16.743 INFO: APP EXITING 00:33:16.743 INFO: killing all VMs 00:33:16.743 INFO: killing vhost app 00:33:16.743 INFO: EXIT DONE 00:33:20.045 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:20.045 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:20.045 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:20.306 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:20.306 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:20.306 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:20.306 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:20.306 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:20.306 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:24.513 Cleaning 00:33:24.513 Removing: /var/run/dpdk/spdk0/config 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:24.513 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:24.513 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:24.513 Removing: /var/run/dpdk/spdk1/config 00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 
00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:24.513 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:24.513 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:24.513 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:24.513 Removing: /var/run/dpdk/spdk2/config 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:24.513 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:24.513 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:24.513 Removing: /var/run/dpdk/spdk3/config 00:33:24.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:24.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:24.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:24.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:24.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:24.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:24.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:24.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:24.514 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:24.514 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:24.514 Removing: /var/run/dpdk/spdk4/config 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:24.514 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:24.514 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:24.514 Removing: /dev/shm/bdev_svc_trace.1 00:33:24.514 Removing: /dev/shm/nvmf_trace.0 00:33:24.514 Removing: /dev/shm/spdk_tgt_trace.pid1731461 00:33:24.514 Removing: /var/run/dpdk/spdk0 00:33:24.514 Removing: /var/run/dpdk/spdk1 00:33:24.514 Removing: /var/run/dpdk/spdk2 00:33:24.514 Removing: /var/run/dpdk/spdk3 00:33:24.514 Removing: /var/run/dpdk/spdk4 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1729920 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1731461 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1731980 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1733139 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1733356 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1734588 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1734757 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1735098 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1736006 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1736773 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1737136 00:33:24.514 Removing: 
/var/run/dpdk/spdk_pid1737402 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1737680 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1738034 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1738392 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1738741 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1739025 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1740187 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1743452 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1743807 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1744168 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1744379 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1744851 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1744883 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1745440 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1745600 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1745958 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1745997 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1746334 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1746442 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1747014 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1747155 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1747530 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1747896 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1747923 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1748120 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1748345 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1748694 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1749044 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1749397 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1749620 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1749815 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1750138 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1750485 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1750834 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1751211 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1751457 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1751688 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1752038 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1752385 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1752741 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1753167 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1753585 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1753934 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1754282 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1754632 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1754703 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1755106 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1760096 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1817533 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1823216 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1835417 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1842403 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1847813 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1848703 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1856327 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1864192 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1864194 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1865307 00:33:24.514 Removing: /var/run/dpdk/spdk_pid1866498 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1867762 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1868437 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1868522 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1868774 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1869013 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1869105 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1870108 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1871116 00:33:24.775 Removing: 
/var/run/dpdk/spdk_pid1872132 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1872803 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1872807 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1873144 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1874569 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1875958 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1886356 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1886827 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1892410 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1899810 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1902903 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1916693 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1928463 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1930552 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1931708 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1953691 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1958727 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1990603 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1996474 00:33:24.775 Removing: /var/run/dpdk/spdk_pid1998367 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2000489 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2000828 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2001016 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2001197 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2001907 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2003915 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2004991 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2005516 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2008075 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2008784 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2009668 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2015593 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2028727 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2033541 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2041005 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2042501 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2044122 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2049883 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2055283 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2065383 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2065391 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2071307 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2071672 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2072009 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2072418 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2072528 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2078555 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2079246 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2085085 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2088174 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2095167 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2102208 00:33:24.775 Removing: /var/run/dpdk/spdk_pid2112641 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2121755 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2121759 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2146279 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2146959 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2147711 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2148514 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2149416 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2150176 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2150923 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2151678 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2157189 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2157511 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2165220 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2165584 00:33:25.036 Removing: 
/var/run/dpdk/spdk_pid2168115 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2176329 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2176341 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2183430 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2185641 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2188058 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2189339 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2191861 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2193111 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2203914 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2204582 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2205193 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2208024 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2208661 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2209329 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2213904 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2214124 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2215638 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2216220 00:33:25.036 Removing: /var/run/dpdk/spdk_pid2216373 00:33:25.036 Clean 00:33:25.036 21:23:52 -- common/autotest_common.sh@1451 -- # return 0 00:33:25.036 21:23:52 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:25.036 21:23:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:25.036 21:23:52 -- common/autotest_common.sh@10 -- # set +x 00:33:25.036 21:23:52 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:25.036 21:23:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:25.036 21:23:52 -- common/autotest_common.sh@10 -- # set +x 00:33:25.297 21:23:52 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:25.297 21:23:52 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:25.297 21:23:52 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:25.297 21:23:52 -- spdk/autotest.sh@391 -- # hash lcov 00:33:25.297 21:23:52 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:25.297 21:23:52 -- spdk/autotest.sh@393 -- # hostname 00:33:25.297 21:23:52 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:25.297 geninfo: WARNING: invalid characters removed from testname! 
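[Editor's note] Coverage collection starts here: the post-test capture above (cov_test.info, tagged with the spdk-cyp-12 hostname) is merged with a baseline tracefile captured earlier in the run (cov_base.info), and the merge is then filtered in the passes that follow to drop DPDK, system headers, and a few example apps, leaving cov_total.info. A condensed sketch of that capture/merge/filter flow; the repeated --rc options are left out for brevity, $SPDK_DIR is a placeholder for the build tree, and the genhtml step is an illustrative addition that does not appear in this log.

  # Post-test capture, tagged with the host name (the baseline capture happened earlier).
  lcov -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info

  # Merge baseline + test data, then strip trees we do not want counted.
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info

  # Optional: render an HTML report from the filtered tracefile.
  genhtml cov_total.info -o coverage_html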
00:33:51.871 21:24:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:52.131 21:24:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:54.676 21:24:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:56.060 21:24:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:57.444 21:24:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:59.358 21:24:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:00.741 21:24:27 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:00.741 21:24:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.741 21:24:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:00.741 21:24:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.741 21:24:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.741 21:24:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.741 21:24:27 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.741 21:24:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.741 21:24:27 -- paths/export.sh@5 -- $ export PATH 00:34:00.741 21:24:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.741 21:24:27 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:00.741 21:24:27 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:00.741 21:24:27 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721071467.XXXXXX 00:34:00.741 21:24:27 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721071467.5jKbLm 00:34:00.741 21:24:27 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:00.741 21:24:27 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:00.741 21:24:27 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:00.741 21:24:27 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:00.741 21:24:27 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:00.741 21:24:27 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:00.741 21:24:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:00.742 21:24:27 -- common/autotest_common.sh@10 -- $ set +x 00:34:00.742 21:24:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:00.742 21:24:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:00.742 21:24:27 -- pm/common@17 -- $ local monitor 00:34:00.742 21:24:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:00.742 21:24:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:00.742 21:24:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:00.742 21:24:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:00.742 21:24:27 -- pm/common@21 -- $ date +%s 00:34:00.742 21:24:27 -- pm/common@25 -- $ sleep 1 00:34:00.742 
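[Editor's note] start_monitor_resources, which begins above, launches the power-management collectors (collect-cpu-load, collect-vmstat, collect-cpu-temp, and collect-bmc-pm via sudo -E) in the lines just below, all sharing one date +%s suffix; each collector's PID ends up in a pid file under the power output directory, and stop_monitor_resources at the end of the log reads those files back and sends SIGTERM. A stripped-down sketch of that start/stop-by-pidfile pattern; the collector names and the -d/-l/-p arguments are taken from the log, while POWER_DIR and the loop structure are illustrative.

  POWER_DIR=$OUTPUT_DIR/power          # placeholder for .../spdk/../output/power
  STAMP=$(date +%s)

  # Start each collector in the background with a shared log/pid suffix.
  # (The log later checks $POWER_DIR/collect-*.pid, so each collector's PID lands there.)
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
      scripts/perf/pm/$mon -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$STAMP" &
  done
  # collect-bmc-pm is launched the same way in the log, but through sudo -E.

  # Later, stop whatever is still recorded in the pid files.
  for pidfile in "$POWER_DIR"/collect-*.pid; do
      [[ -e $pidfile ]] || continue
      kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
  done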
21:24:27 -- pm/common@21 -- $ date +%s 00:34:00.742 21:24:27 -- pm/common@21 -- $ date +%s 00:34:00.742 21:24:28 -- pm/common@21 -- $ date +%s 00:34:00.742 21:24:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721071468 00:34:00.742 21:24:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721071468 00:34:00.742 21:24:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721071468 00:34:00.742 21:24:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721071468 00:34:01.003 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721071468_collect-vmstat.pm.log 00:34:01.003 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721071468_collect-cpu-temp.pm.log 00:34:01.003 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721071468_collect-cpu-load.pm.log 00:34:01.003 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721071468_collect-bmc-pm.bmc.pm.log 00:34:01.955 21:24:29 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:01.955 21:24:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:01.955 21:24:29 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:01.955 21:24:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:01.955 21:24:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:01.955 21:24:29 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:01.955 21:24:29 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:01.955 21:24:29 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:01.955 21:24:29 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:01.955 21:24:29 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:01.955 21:24:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:01.955 21:24:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:01.955 21:24:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:01.955 21:24:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:01.955 21:24:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:01.955 21:24:29 -- pm/common@44 -- $ pid=2229695 00:34:01.955 21:24:29 -- pm/common@50 -- $ kill -TERM 2229695 00:34:01.955 21:24:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:01.955 21:24:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:01.955 21:24:29 -- pm/common@44 -- $ pid=2229696 00:34:01.955 21:24:29 -- pm/common@50 -- $ 
kill -TERM 2229696 00:34:01.955 21:24:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:01.955 21:24:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:01.955 21:24:29 -- pm/common@44 -- $ pid=2229698 00:34:01.955 21:24:29 -- pm/common@50 -- $ kill -TERM 2229698 00:34:01.955 21:24:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:01.955 21:24:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:01.955 21:24:29 -- pm/common@44 -- $ pid=2229721 00:34:01.955 21:24:29 -- pm/common@50 -- $ sudo -E kill -TERM 2229721 00:34:01.955 + [[ -n 1605777 ]] 00:34:01.955 + sudo kill 1605777 00:34:01.966 [Pipeline] } 00:34:01.986 [Pipeline] // stage 00:34:01.992 [Pipeline] } 00:34:02.011 [Pipeline] // timeout 00:34:02.017 [Pipeline] } 00:34:02.036 [Pipeline] // catchError 00:34:02.041 [Pipeline] } 00:34:02.058 [Pipeline] // wrap 00:34:02.065 [Pipeline] } 00:34:02.080 [Pipeline] // catchError 00:34:02.088 [Pipeline] stage 00:34:02.091 [Pipeline] { (Epilogue) 00:34:02.106 [Pipeline] catchError 00:34:02.108 [Pipeline] { 00:34:02.124 [Pipeline] echo 00:34:02.126 Cleanup processes 00:34:02.133 [Pipeline] sh 00:34:02.426 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:02.426 2229801 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:02.426 2230244 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:02.442 [Pipeline] sh 00:34:02.730 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:02.730 ++ grep -v 'sudo pgrep' 00:34:02.730 ++ awk '{print $1}' 00:34:02.730 + sudo kill -9 2229801 00:34:02.744 [Pipeline] sh 00:34:03.029 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:15.310 [Pipeline] sh 00:34:15.605 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:15.605 Artifacts sizes are good 00:34:15.623 [Pipeline] archiveArtifacts 00:34:15.631 Archiving artifacts 00:34:15.822 [Pipeline] sh 00:34:16.121 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:16.142 [Pipeline] cleanWs 00:34:16.149 [WS-CLEANUP] Deleting project workspace... 00:34:16.149 [WS-CLEANUP] Deferred wipeout is used... 00:34:16.155 [WS-CLEANUP] done 00:34:16.157 [Pipeline] } 00:34:16.173 [Pipeline] // catchError 00:34:16.183 [Pipeline] sh 00:34:16.467 + logger -p user.info -t JENKINS-CI 00:34:16.500 [Pipeline] } 00:34:16.514 [Pipeline] // stage 00:34:16.519 [Pipeline] } 00:34:16.534 [Pipeline] // node 00:34:16.538 [Pipeline] End of Pipeline 00:34:16.556 Finished: SUCCESS
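[Editor's note] The epilogue above repeats the leftover-process sweep used in the prologue: pgrep -af lists anything still running out of the workspace, the pgrep invocation itself is filtered back out, and the remaining PIDs are killed with SIGKILL before the artifacts are archived and the workspace is wiped. A compact sketch of that sweep, assuming the same workspace path as this job; the '|| true' mirrors the log's tolerance of an empty match.

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

  # List matching processes, drop the pgrep line itself, keep only the PIDs.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

  # Kill whatever is left; do not fail the cleanup when nothing matched.
  [[ -n $pids ]] && sudo kill -9 $pids || true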